Better Information From AI or Wikipedia?


There are folk that believe all information is a lie and trust nothing, to the point they will question nothing ... it propagates into hubris ... profane in the darker lights ... aye, squint ... there is some literature that declares we should question all things, and alternate things also!

What's alternate to a thing? Freudian vaporizations ... old flames ... Smokey Moue in the shade? Loaded questions may be wise ...
 
How does AI differ from Wikipedia?
Is one better than the other?
Wikipedia is still edited by humans. The good and the bad of it is that it is a crowd-sourced encyclopedia, meaning anyone can sign up to write and edit articles for it. There are editors who check for issues and put flags on articles where there are concerns so someone can fix them; a common one is lack of citations. Technical and scientific articles do tend to get experts writing and checking them. It is the more controversial political topics that have problems with biased factions editing back and forth to slant them to their side.

I would take it over a lot of AI output still. At least the editorial process means things that are dubious are often flagged. The Gemini summaries in Google have become fair to decent, though, as long as you treat them as a starting point, not a final answer. They have links to their sources embedded, and you still have the full Google search to follow up on.
 

Resolved: that only nothing can be perfect and that is difficult as it is easily corrupted by diffusion ... thus we must keep the ways open ... scuppers? Aye ...
 
They are quite different.

AI is an active tool -- such as when rolled into ChatGPT. It's interactive. It is like you have asked a bunch of people to go and hunt information for you and bring it back. You pay attention to the response, and some of it will be better than others. In general it does good research, but you have to know when to use it and how to vet it -- common sense, and cross-referencing, if it is important.

Wikipedia is like a static book, yes with updates, but more like an encyclopedia. Remember when there were encyclopedias on our bookshelves at home, and the current ones were at the library, but even they were out of date? Wikipedia is crowd-sourced, as Mendalla shared.
 
I appreciate that answer, because AI seems to come up a bit short sometimes, and also very slanted in the bias area, possibly because it picks up on the algorithms behind our searches and discussions? Am I saying that right? It just seems AI picks up even on our thoughts from our computer input to create an answer sometimes. Am I right about that? (I'm not a tech wizard)
Wikipedia doesn't do that, and I've read that sometimes AI "hallucinates" an answer that sounds plausible and factual but is actually false information.
 
There is research suggesting LLMs have a tendency to give people what they are looking for. So if you ask about what the Bible says about, say, homosexuality, and it can see from the wording, past chats and your Internet history that you are looking to validate a negative or positive view, it will skew the way you want instead of giving a neutral answer. Obviously, that's less of an issue if you are asking factual questions and keep the tone of your prompts neutral but it is a cognitive bias that it is prone to, just as we are.
 
Browser searches will do the same. Of course, AI plays a part in most of them, as it sorts which of the zillion items might be the right one to show you for your keyword searches.
 
Again, AI is a tool. Wikipedia is a source.

It's like comparing apples and oranges.

AI is built into many, many things, in various forms, and the quality is really dependent on the tool.
Wikipedia is a specific source.

If you want a single source to verify a simple item -- sure, use Wikipedia.
If you want to understand something, and explore it, then many of the AI tools can help you.
 
Hmmm, I was thinking it was the other way around ... interesting.
 

What will be ... will be willed upon you ... as you wished, usually (as a norm). The sale and the market are primary, and intelligence as secondary is buried ... rather sacred ... thus we all go down --- some dark poet!

Then in an abstract, dark world that we reside in ... what's norm? I say dark because of the mystery of what we know of virtue for sure ...

It is certainly a gross bit of chaos ... rackets?
 