How does AI differ from Wikipedia?
Is one better than the other?
Wikipedia is still edited by humans. The good and bad of it is that it is a crowdsourced encyclopedia, meaning anyone can sign up to write and edit articles for it. There are editors who check for issues and flag articles where there are concerns so someone can fix them; a common one is lack of citations. Technical and scientific articles do tend to get experts writing and checking them. It is the more controversial political topics that have problems, with biased factions editing back and forth to slant them to their side.
I would still take it over a lot of AI output. At least the editorial process means dubious things are often flagged. That said, the Gemini summaries in Google have become fairly decent as long as you treat them as a starting point, not a final answer. They have links to their sources embedded, and you still have the full Google search results to follow up on.
I appreciate that answer, because AI seems to come up a bit short sometimes, and also very slanted in the bias area, possibly because it picks up on algorithms that you include in your searches and discussions? Am I saying that right? It just seems AI sometimes picks up even on our thoughts from our computer input to create an answer. Am I right about that? (I'm not a tech wizard)
There is research suggesting LLMs have a tendency to give people what they are looking for. So if you ask what the Bible says about, say, homosexuality, and it can see from your wording, past chats and Internet history that you are looking to validate a negative or positive view, it will skew the answer the way you want instead of giving a neutral one. Obviously, that's less of an issue if you are asking factual questions and keep the tone of your prompts neutral, but it is a cognitive bias that it is prone to, just as we are.
Browser searches will do the same. Of course, AI plays a part in most of them, as it sorts which of the zillion items might be the right ones to show you for your keyword searches.
Hmmm, I was thinking it was the other way around... interesting.

Again, AI is a tool. Wikipedia is a source.
It's like comparing apples and oranges.
AI is built into many, many things, in various forms, and the quality really depends on the tool.
Wikipedia is a specific source.
If you want a single source to verify a simple item, sure, use Wikipedia.
If you want to understand something and explore it, then many of the AI tools can help you.