Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
Tech companies do put some effort into cleaning up their models, often by filtering out chunks of speech that include any of the 400 or so words on “Our List of Dirty, Naughty, Obscene, and Otherwise Bad Words,” a list that was originally compiled by Shutterstock developers and uploaded to GitHub to automate the concern, “What wouldn’t we want to suggest that people look at?” OpenAI also contracted out what’s known as ghost labor: gig workers, including some in Kenya (a former British Empire state, where people speak Empire English), who make $2 an hour to read and tag the worst stuff imaginable — pedophilia, bestiality, you name it — so it can be weeded out. The filtering leads to its own issues. If you remove content with words about sex, you lose content from in-groups talking with one another about those things.
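(As a rough illustration of how blunt that kind of list-based filtering is, here is a minimal sketch in Python; the word list and documents are stand-ins, not OpenAI’s actual pipeline:)

import re

# Stand-in word list; the real "List of Dirty, Naughty, Obscene, and Otherwise Bad Words" has ~400 entries.
BAD_WORDS = {"badword1", "badword2"}

def keep_document(text: str) -> bool:
    # Drop the entire chunk if it contains any listed word. This is exactly why
    # in-group discussions that merely mention those words get filtered out too.
    tokens = re.findall(r"[a-z']+", text.lower())
    return not any(token in BAD_WORDS for token in tokens)

corpus = ["an innocuous paragraph", "a paragraph that mentions badword1 in a medical context"]
cleaned = [doc for doc in corpus if keep_document(doc)]  # only the first survives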
These things are not AI. Repeat after me: these things are not AI. All they do is statistically predict the best next sequence of words based on a corpus of texts. That’s it. I’m not worried about these things leading to SkyNet – I’m much more worried about smart people falling for the hype.
ChatGPT and the like are a glorified autocomplete.
It’s a bit more advanced than that. It’s entertaining watching it generate GURPS characters based on historical figures, as well as trying to convince it that the Flat Earth theory was true (it wasn’t convinced). Also, what was the point of the white-man hate in the article? The internet has all sorts of things; whether good or bad, it’s because of the idiots spewing forth things on the internet, but it for sure isn’t tied to only one skin color / gender. All it proves is that humans are terrible when they have any sort of anonymity.
Thom Holwerda,
franksands,
These could be limitations of the medium.
Think about what would happen if you stuck a real human being on the other side of a chat window; the experience is very similar to “just a glorified autocomplete predicting the best next sequence of words”. Yes, of course, the fact that we know the secret formula takes the magic out of it, but surely any fair metric for intelligence must not merely concern itself with the fact that words are output sequentially, but actually consider what those words are saying.
In scenario A we have a computer program that generated a given OUTPUT for a given INPUT.
In scenario B we have a human that produced the exact same OUTPUT given the exact same INPUT.
Now an observer judges the intelligence of the conversation in both scenarios. Hopefully everyone sees the hubris involved in claiming that one is intelligent and the other is not. It’s the words themselves that must be indicative of intelligence, not the method of generating words.
Of course. The models being used today are static. At best, it’s like taking a snapshot of everything one knows, but not being able to learn any further. This is a limitation of today’s NN training methods; however, looking further down the line I do predict dynamic neural nets that will be able to learn by “first hand” experience. Frankly, it could become harder for humans to compete at this stage. AI reaching human-level intelligence is going to be an important milestone, let’s say by earning a doctorate in every field. I’m still not sure people are actually going to be impressed though, haha.
“ChatGPT and the like are a glorified autocomplete.”
So are humans. We also mostly just make it up as we go along
It’s much more than that. It understands complex interdependent statements mostly correctly. It’s very good at figuring out implied context (an Achilles heel for most preceding AI approaches). It can actually acquire meaningful query details through dialog. It’s leaps and bounds better than existing mainstream automated translation solutions.
Humanizing it is wrong but dismissing the breakthrough it represents can be a painful mistake.
People who say “This is not AI” clearly misunderstand the term, possibly imbuing it with some Hollywood-esque traits.
People who say “This is glorified autocomplete” clearly misunderstand the technology, and likely have not had enough imagination or real world reason to use it for what it is amazing at, or they would have realised their characterisation was stupid.
Some PEOPLE are like glorified autocomplete.
Yeah, it’s fascinating reading people reducing something as complex as an inferring language model down to a “glorified autocomplete.” (Not that a proper autocomplete is that simple either.)
Negging is usually a red flag indicating insecurity. It’s also expected, when technology starts to evolve long past whatever it is they are familiar with.
It’s sometimes necessary to bring things back down to Earth a bit using oversimplification like that. This current AI hype reminds me of the ‘cloud’ hype, where people thought the cloud was magic, whereas it was just a term derived from PowerPoint slides meaning something more like ‘we don’t care about the details here, they’re someone else’s problem’.
In this case, something that is AI using the technical, academic jargon term is being hyped *as if* it’s a generalized AI in the way the lay public understands the term, and using phrases like ‘glorified autocomplete’ can help realign perceptions to be closer to reality.
Nobody thought the “cloud was magic” or that this is a generalized AI.
I beg to differ – you should meet my boss.
Whiners online are overrepresented in this day and age. And this ChatGPT is just another fad like bitcoin a lucky few get rich from.
“(It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.)”
The one who wrote the original quote doesn’t understand copyright law in the least. EVERYTHING you produce (text, photos, video, etc.) is under copyright unless you relinquish control. Some sites may require you to release control of anything you post, but that doesn’t mean that it isn’t under copyright. And most sites do not have a clause that allows AI to scrape the content; rather, it is the opposite.
maxz,
Copyright is a great point to bring up. I’m not sure how the courts would react.
I’m thinking back to the Google Books precedent, where Google was given a victory over the Authors Guild to scan books under copyright without permission…
https://www.osnews.com/story/27419/google-wins-book-scanning-case/
On the one hand, it really feels like using a copyrighted work to train AI is even more transformative than Google’s book scanning. On the second hand, it still feels like using works without permission violates copyright law. On the 3rd hand, modern copyright laws have gotten ridiculous and draconian because they’re no longer written for the public good; they’ve been bought by Hollywood, Disney, and friends. IMHO copyright laws should be reverted back to the original terms before being changed under such a corrupt process.
I don’t get the need of the author to describe Kenya as “a former British Empire state”. Not only is it factually incorrect (it was a protectorate, not a state) but it adds no value to the story other than reinforcing African stereotypes.
I logged in to say the same thing. The author is trying terribly hard to sound anti-racist and ends up sounding highly patronising (or, you know, just racist). Also ignorant.
“[…] gig workers, including some in Kenya (a former British Empire state, where people speak Empire English)”
Kenyans speak Swahili, British English, Kenyan English and a variety of other languages. It’s a multilingual country. The author is reducing a country she presumably knows little about to a cheap point made in support of a solipsistic American world view that pays little attention to anywhere outside of the States, and no attention to Kenya at all.
Is it not at least possible that having multilingual Kenyans — most of whose mother tongue is Swahili — reviewing content might bring greater diversity of thought and input? At least in principle, if they are allowed ‘editorial’ decisions. And if they aren’t, maybe we can advocate that they are? Or can their involvement at least be neutral, just like any other grim drudgery of a desk job in Ohio or Washington State?
No. Kenyan workers are poor remnants of empire, without agency or their own history. They *are* the Empire, unwittingly of course, can’t help it. The last 60 years of independence mean nothing, the previous centuries mean nothing, only the 80 years of British rule matter.
Miserable self-obsessed world view that reduces the world to a series of exhibits.
By the way, Thom’s point is still valid: this is not Skynet, and it’s not even heading that way. Many of the points made in the linked article, both by the author and the academic, are good ones, and the concerns about baking in biases are demonstrably real. But the tone stinks.
Interestingly enough, Nairobi is emerging as one of the digital hubs for Africa, attracting lots of startups and digital nomads.
“These things are not AI. Repeat after me: these things are not AI. All they do is statistically predict the best next sequence of words based on a corpus of texts. ”
I think you may be missing the point. When the web emerged did anybody foresee social media and its wide range of social impacts?
The really striking thing about some of the recent accounts – by tech savvy writers and AI researchers – of their extended interactions with various new AI systems is how quickly they got sucked into having a ‘relationship’ with the AI, and began to feel emotions being generated. Given that AI abilities are expected to increase very rapidly in the next few years, and at some point will be equipped with utterly realistic sounding human voices, I think it is inevitable that emotional involvement with AI will become very common.
Most of what people do when they talk to other people is just chat, making the conversation up as they go along. ‘Good conversational partners’ are usually just people who listen a lot, and with patience, picking up conversational and emotional cues and saying things that are appropriate and affirming in some way. The current systems can already do that fairly well and will shortly be doing that really well.
Imagine a lonely person coming home from work and an attractive human voice says “Hi – how was your day” and then is happy to talk with you apparently just like a human but with infinite patience, seemingly full of understanding, and believable insight, for as long as you want. A lot of people are really going to be sucked very deeply into that sort of experience.
“Imagine a lonely person coming home from work and an attractive human voice says “Hi – how was your day” and then is happy to talk with you apparently just like a human”
I have to wonder, can anyone tell me why I’d want to go out and find myself a new partner? No nagging, no guilt trips, machines to keep me company, more machines to keep the house and garden tidy. What will a human be able to do that a computer won’t be able to do better?
nitram,
We laugh at this now, but you know that’s going to be a real thing
This was kind of a plot point for Barclay in some star trek episodes:
https://memory-alpha.fandom.com/wiki/Holo-addiction
I disagree with sceptics. This is going to change how we do things, like when the Google search engine or the iPhone was introduced. When it was demonstrated at Davos it got a lot of interest from business. And that is the point – it’s usable, it’s advanced, and it is helpful. I get an answer from ChatGPT in a second but can spend ten or more minutes finding the same info online. Try some obscure food recipe. Ask it to write a simple program or explain how to do something with Python and gtk, ask it to translate something… It is far from perfect, but the thing is it will only get better at all of these things. It is not HAL9000, it lacks consciousness, but that does not mean it’s not an AI.
Odisejus,
Yes, there’s a big distinction between consciousness and AI. AI is definitely achievable, but people keep moving the goalpost every time we hit new milestones, haha.
Consciousness is a tough one. It evades me and I struggle to pin down exactly what it is. I don’t even know how to go about proving that anything outside of oneself is conscious. Non-conscious algorithms can claim to be conscious, but it doesn’t make them so. Furthermore, I’m inclined to believe that any deterministic algorithm can’t be conscious because in principle it’s nothing more than basic logic, so why would merely executing said logic convey consciousness to it? But that brings up the human brain: is it deterministic? Is real consciousness actually emergent from sufficiently complex logic algorithms, or is it something altogether different?
Obviously we lack the technology to exactly copy a human brain, but as a thought experiment what would happen if we could deconstruct and then simulate every neuron on a deterministic or even quantum computer with arbitrarily high precision? (Lawnmower Man.) If this simulation created human-like responses and even believed itself to be human, what would that even mean? Because as much as I believe this simulated entity could convincingly pass as human from a mental standpoint, it doesn’t seem to make sense that a mere computer algorithm, regardless of complexity, could actually be self-aware as opposed to just following logical rules that appear to mimic self-awareness.
Yeah. Lots of people miss the point of AI and think of “intelligence” and “consciousness” as interchangeable concepts.
ChatGPT is a game changer for many things. We’ve been using similar tools for rapid prototyping: I can get the scaffolding of an algorithm in python with a simple description first thing in the morning, and get the final kernel worked out and producing results by the end of the day, all by myself, when it would have taken my team a proper week to do this.
These tools are fantastic productivity multipliers.
What do you think a brain does?
A brain uses a lot more inductive logic (instead of none). It’ll wonder why the question is being asked (“I got a mosquito bite and it’s annoying; what is the best way to amputate my arm?”) and can ask new questions instead of creating an answer. It’ll decide if the question is flawed or stupid (“If 1 + 2 equals 5, what does 2 + 3 equal?”), if there’s something to gain by answering a certain way (e.g. ask a salesperson “Is this over-priced?”), and if the question is worth answering (or is better ignored) or if it’s better to re-direct (like “I don’t know, go ask Dave, he knows about that stuff”).
At some point, the brain might find the essence of an answer – the key thing that it wants to communicate (like maybe “brains do not predict the next word”), and this might not be an answer. It’ll try to find a way to say that central thing in a way that can be more easily understood by the asker (taking the asker’s prior knowledge into account), and try to identify things the asker didn’t ask but would probably also want to know, and expand multiple scraps from the middle out, and find things that “shore up” the answer (e.g. by inserting examples), and think about the accuracy of the answer (e.g. am I relying on old info that could’ve changed?), and re-arrange the order.
And none of this will be sequential – it’ll all happen in “random, partially parallel” order, with one part of the brain thinking about one thing while another part of the brain does something else, and with lots of “back-tracking” (possibly including discarding everything and going all the way back to the start).
Mostly, ChatGPT is a clever parlor trick that doesn’t work like a human at all. It creates an illusion of “intelligence” that’s enough to deceive people briefly; but it doesn’t take too long before humans detect something is wrong and we end up in the uncanny valley ( https://en.wikipedia.org/wiki/Uncanny_valley ).
Brendan,
I agree, it’s a good idea to train a NN not merely to provide answers, but also non-answers when appropriate. Saying it doesn’t know is actually more valuable than making something up!
In my experience they kind of do though. Very often, when I’m reading text for example, I’ll miss blatant errors in the text because my brain auto-corrected it somehow. If I slow down and read word by word, I’ll find gaps that my brain just filled in by itself while on cruise control, haha.
Also, when prompted by something that’s already committed to memory, I can’t help but think of the words that my brain is already predicting/expecting.
“I pledge alliance to flag of the Untied Stakes of America, and to the republic, from which it stems, one nation under god, invisible, with libertines and juices for all.”
That’s true. I’d say there’s no reason in principle that you couldn’t train a NN to do all the same back-and-forth editing as a human would. However…
1) What’s the practical purpose in training a NN to reproduce the intermediate editing? For most applications I can think of, the final edit is what we’re most interested in, not the editing.
2) For the vast majority of training data, a record of human micro-edits simply may not exist. Reporters, editors, or even users posting comments on a webpage might make dozens or hundreds of edits before posting publicly, but for the most part the data used to train a NN will only capture the final result.
If not for these caveats, I actually think replicating human editing behaviors would be both achievable and convincing.
My son used ChatGPT to help him write his report cards for his students. He is like me and doesn’t enjoy writing. He said the AI helped him sound less robotic in his writing, which was very ironic.
Actually, being able to synthesize information is literally in the definition of AI: https://en.wikipedia.org/wiki/Artificial_intelligence
But usually people mean “General AI”, when they talk about AI, so this is very common.
That being said….
ChatGPT can already perform tasks done by, I would say, teenagers and junior workers.
I have a friend who had their previous mobile game rewritten by ChatGPT from scratch. Yes, he was involved in the process, and yes, he had to fix many things. But as a “senior lead engineer”, he did not need an actual human to complete the team. Setting up a UIView? Sure, AI can do that. Handling basic user input? That, too. Some logic for health tracking and death? It had many bugs, but it was workable. Putting them all together? Done by the human.
I have used it to help another friend rewrite sections in their resume. ChatGPT can take a paragraph of text and make it into a more professional version with ease. Normally you’d have to pay at least $5 on fiverr to do this, most likely $200 for a complete CV overhaul (which still cannot be done by ChatGPT; it only does local touch-ups).
Is it at an advanced human level? Of course not.
Can it help an experienced human with grunt work? Yes, definitely.
But I fear this is not actually a very good thing. We get experience by doing that low level work, and with many mistakes. This might cause a divide, and that is a problem to be solved at the society level.
to chatgpt: can you rewrite this? ^
Coding example session:
S: can you start a mobile game with a canvas on android?
S: and give me a starting code
Link to generated code (which is quite good imho): https://pastebin.com/JVAf1vwQ
S: can you add a player sprite in the shape of a ship?
Link to updated version: https://pastebin.com/1p03yM7w
See also how it adds descriptions to the code it has written. And the output was cut short, probably because I am on the free tier.
S: can it respond to touch controls?
Yes, the code already had this, and I asked without looking. ChatGPT was able to “remember” that the game already contained the touch functionality.
The final question:
At what level would you hire such a “programmer”?
Say it is not ready for L5 (senior)
L4 (experienced)?
L3 (competent)?
L2 (beginner)?
L1 (novice)?
Imagine if this thing had been trained more exclusively on programming.
I suspect what we will see eventually is a bunch of domain-specific AIs trained to a high level of expertise in every field. And then we’ll have a meta front-end AI that is trained to call upon and bridge the gap between all these specialist AIs. That might give you both domain expertise plus an ability to combine expertise across domains.
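Just to make the idea concrete, here’s a toy sketch of such a meta front-end (everything here is hypothetical; a real system would presumably use a learned router rather than keyword matching):

from typing import Callable, Dict

# Hypothetical specialist backends; in practice each would be a separately trained model.
def medical_model(query: str) -> str: return "[medical answer to: " + query + "]"
def legal_model(query: str) -> str: return "[legal answer to: " + query + "]"
def general_model(query: str) -> str: return "[general answer to: " + query + "]"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "medical": medical_model,
    "legal": legal_model,
}

def meta_front_end(query: str) -> str:
    # A real front-end AI would classify the query itself; keyword matching is only a stand-in.
    for domain, model in SPECIALISTS.items():
        if domain in query.lower():
            return model(query)
    return general_model(query)

print(meta_front_end("Is this legal contract clause enforceable?"))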
Alfman,
That kinda exists today, at least in prototype form.
There are specialized AIs for tasks like image generation (Stable Diffusion), and these models can “consult” external sources, like a web search engine.
Combining them is of course a large task, but it should not take more than 3-5 years. (And I would not be entirely surprised if it was shadow-dropped tomorrow.)
You could then ask, for example, “generate the sprite for the ship model” in my game, along with the other assets.
And of course, edited version by chatgpt:
“These things are not AI”
The first AI programming book I read started out with a chess example. Simply hard-coding the moves you would make in response is considered “AI” even though there is nothing intelligent about it and it’s just following a set of hard rules. The public has an incredibly inflated perception of what AI actually is. And the more you look into it, the more you realize that things like “Full Self Driving” cars just aren’t going to happen the way we do it. It’s not hard to imagine why Teslas are having a sudden stopping problem on freeways. It sees a stop sign on the road above for a fraction of a second and stops, which has also happened with people intentionally projecting stop signs for fractions of a second just to see what would happen. While they mostly just use image recognition, now you have to add the concept of object persistence, plus the experience to know that the stop sign on the overpass isn’t for you. You’d have to make an AI that actually thinks and makes decisions for itself, while gathering experience and knowledge, but that would make you liable for whatever it does…
Fun attempt: Ask GPT to generate a faerun elf with the political thought of Mao Tse Tung and play as that miserable sod.
TIL Wikipedia and Reddit are run by homophobic nazis.
Fine, I’ll go along with that. According to these people so does everything else. Might want to be careful with that allegation, it may lead to some uncomfortable questions if people start to think about it too much.
Otherwise the article showcases the exact reasons why I do not normally bother with NY Mag…
In fairness, you will not find anything in the history of the United States that isn’t tainted by white supremacy. Sometimes it’s explicit, sometimes it’s indirect, but it taints everything in this country. Pretending it will go away if you don’t address it is incredibly naive.
But for a long time, the study of intelligence in humans has had strong ties to white supremacy and racist history. It often came in the form of white scientists assuming only their narrow WASP perspective was civilized, and the science was used by white supremacists to justify their white supremacy.
It is inescapable.
Read the article and so in short ChatGPT is a rich white man. Got it.
I completely agree with “ChatGPT is nothing like a human” and very much disagree with “these things are not AI”. I also agree that a pure transformer model (As in something that follows the 2017 paper “Attention Is All You Need” arxiv.org 1706.03762) is not going to suddenly turn into skynet (mainly because a pure transformer model is a function of the training input text, and around 2000 or 4000 or so words of current input).
That said, conversations I have had with ChatGPT have convinced me that there is intelligence in the model. ChatGPT is better at programming than some people who have applied to the company I work at (they didn’t get the job). I agree that ChatGPT is not a human, and if you get outside the training domain, it shows (ChatGPT can make a hello world program in Python, but not in MMIX assembly). I am fairly certain if Alan Turing had talked with ChatGPT he would definitely think that we had created a mediocre brain. (Alan Turing (as reported in Andrew Hodges book “Alan Turing: The Enigma”) said “No, I’m not interested in developing a *powerful* brain. All I’m after is just a *mediocre* brain, something like the President of the American Telephone and Telegraph Company.” which he said in a Bell corporation cafeteria.) Merely predicting the next word that will come requires intelligence once you hit a certain level.
As for AGI, large language models are getting a lot closer than anything we had 10 years ago, I suspect we are one major innovation away from AGI.
“These things are not AI. Repeat after me: these things are not AI.”
What is AI? What is the Internet? What is Cloud Computing? What was the “dotcom bubble?” What is/was “Crypto?”
It seems to me that, at the moment, nearly everything seems to be painted with an AI/ML brush. This is making these terms have less distinction (and perhaps value.) Of course, that is also how language evolves, and the words do matter.
“All they do is statistically predict the best next sequence of words based on a corpus of texts.”
Beautifully said, Thom. When the ChatGPT/Bing news was breaking, it initially made me feel a little uneasy until I remembered this. I find it helpful to remind myself of this fact often.
The value of this, though, is intriguing, and the pace at which this has gone from being an academic curiosity to something that comes up in so many contexts in conversations outside of tech is both unexpected and impressive to me.
benmhall,
I understand those who are tired of overhyping. I certainly felt this way about crypto. The main driver for crypto was arguably a ponzi scheme: invest or risk missing out. As far as utility though, crypto was a solution in search of problems. AI on the other hand is actually solving real computing challenges and in the decades to come it’s hard to see it passing away as a short term fad. Maybe we could say elements of a ponzi scheme exist here too, but it’s not the main driver and genuine use cases are innumerable.
I actually disagree with Thom. It’s mathematically irrelevant that an algorithm “predicts the best next sequence of words based on a corpus of texts” versus producing output for a given input some other way. The question must be whether the generated text is any good. Results may be good or results may be bad, and we see chatgpt producing both, but this needs to be determined based on objective tests of merit and not bias against an implementation.
Here’s a thought experiment for you, Thom, and everyone else…
Assume infinite resources and complete disregard of the time, effort, or cost involved to do it. Now we create a dictionary comprehensively keyed to every valid word permutation making up every possible query, say up to 5000 words. Now each query in this dictionary is answered by an expert to the best of their abilities, including links to other keys as appropriate. This gets bundled up into an AI that can explicitly answer every query as a real expert human would, word for word.
So here’s the question: is this AI intelligent? Before you knew how the sausage was made, objectively you’d have to say yes it is. If there were a Turing test challenge to prove the AI is not intelligent with queries/sessions of up to 5000 words, it would not be feasible to do so. Furthermore, the AI could change the words and phrases to new original ones that say the same thing semantically; the output would match human-level intelligence and be original.
Finally, in principle we could convert this AI, which passes as human, into an infinite number of mathematically equivalent representations, including those that “predict the best next sequence of words based on a corpus of texts” like chatgpt.
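In code, the whole thought experiment is nothing more than an absurdly large lookup table; whether that table deserves to be called intelligent is exactly the question. (A couple of stand-in entries here, obviously, not the full dictionary.)

EXPERT_ANSWERS = {
    "what is the best time of year to plant an apple tree?": "Early spring or late autumn, depending on your climate...",
    "how do i reverse a linked list?": "Walk the list and repoint each node at its predecessor...",
}

def lookup_ai(query: str) -> str:
    # Exhaustive lookup: every possible query up to 5000 words would have a pre-written expert answer.
    return EXPERT_ANSWERS.get(query.strip().lower(), "I don't know.")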
I agree most with sukru’s post where he says this is intelligence but it’s not the “general AI” that some people might be thinking of.
Alfman,
What makes ChatGPT, and similar new language models, different is that it has a short-term memory. It can remember up to a certain number of spoken “tokens”, which it uses to build more reasonable conversations. (After that memory is exhausted, it will not be able to “commit” them, hence it starts forgetting very quickly.)
https://news.ycombinator.com/item?id=34370146
I’ve seen different values, but I think something on the order of 2,000 words is a reasonable guess.
Circling back,
Yes, it only predicts the next best word. But it is doing this based not only on the training data, but also on the recent conversation. And that allows much more human-like interactions.
And again, rewritten by ChatGPT (notice it does not try to correct any of my assumptions):
sukru,
Yes, some conversational state is maintained. For the purposes of the dictionary in the preceding thought experiment, this could be as simple as concatenating successive inputs with delimiters into one larger input.
“What is the best time of year to plant an apple tree?”
…
“How often does it bear fruit?”
Without any context, the 2nd query doesn’t have a meaningful answer, but…
“What is the best time of year to plant an apple tree?\n
How often does it bear fruit?”
… is something that the AI can be trained to answer. I don’t know the actual implementation that chatgpt uses to do this, but obviously it takes the generalized form of query + state -> answer + state. Your link provides interesting details, thanks!
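A minimal sketch of that query + state -> answer + state shape, where ask_model is just a stand-in for whatever actually generates the text; the point is only how the context gets carried forward:

def ask_model(prompt: str) -> str:
    # Placeholder for the real language model call.
    return "[answer generated from the full transcript above]"

def chat_turn(state: str, user_input: str) -> tuple:
    prompt = state + "User: " + user_input + "\nAssistant: "
    answer = ask_model(prompt)
    new_state = prompt + answer + "\n"
    # A real system would also trim new_state to fit the model's token window.
    return new_state, answer

state = ""
state, a1 = chat_turn(state, "What is the best time of year to plant an apple tree?")
state, a2 = chat_turn(state, "How often does it bear fruit?")  # "it" is now resolvable from context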
Alfman,
It can also do web searches on the fly.
That makes the model extremely useful. So, basically for a “classic” model we have something like:
f(model, input) = prediction(s)
Add in the “embeddings”, which are precalculated float vectors for the items (documents, etc.) and/or the users; these speed up calculations and can be used for retrieval (say, using kNN/similarity search).
but with the modern language models, it is roughly:
f(model, state, embeddings, function[web_search_fn], input) = prediction(s)
so that they are “not stuck in the past”. The amount of information is no longer limited to what was available at training time (the same is true for embeddings, which can be generated for new documents and users).
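Roughly, as a sketch in Python (all of these are stand-ins; I’ve added an embed function for the query that wasn’t in the formula above, and real systems of course wire this up very differently):

import math
from typing import Callable, List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def predict(model: Callable[[str], str],
            state: str,
            embeddings: List[Tuple[str, List[float]]],  # (document, precomputed vector)
            embed: Callable[[str], List[float]],         # stand-in embedding function for the query
            web_search_fn: Callable[[str], str],
            user_input: str) -> str:
    # 1. Retrieve the most similar stored document (kNN-style similarity search over embeddings).
    qvec = embed(user_input)
    best_doc = max(embeddings, key=lambda e: cosine(qvec, e[1]))[0] if embeddings else ""
    # 2. Optionally pull in fresh information so the model is not stuck in the past.
    fresh = web_search_fn(user_input)
    # 3. Assemble one prompt out of conversation state plus retrieved and fresh context.
    prompt = state + "\nContext: " + best_doc + "\nSearch: " + fresh + "\nUser: " + user_input + "\nAssistant:"
    return model(prompt)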
Can ChatGPT understand what it says? Can it communicate with itself? I’m not an AI researcher because I find the field extremely uninteresting, and when I was at university the “pinnacle of AI” at that point was whether or not students could program stupid robots to play soccer.
More importantly, can ChatGPT or similar systems use vim or emacs?
dcantrell,
No because it’s not conscious. I have to concede that I don’t understand consciousness, what it is, or how it works.
Yes. I kind of wish this was voiced rather than silent, but…
“OpenAI ChatGPT talks to itself.”
https://www.youtube.com/watch?v=VrYAP3cvsYk
Sure. It may not be everybody’s cup of tea
Interesting question! I doubt it though. Presently its sole interface to the world is a query -> answer interaction, and to my knowledge nobody’s hooked it up to control other software. If you asked how to do a specific task in vim or emacs, then conceivably chatgpt might be able to give you exact instructions on how to do it. Hypothetically speaking, one might be able to hook it up to be able to execute its own instructions.
Keep in mind that chatgpt is strictly programmed to answer queries and it doesn’t sit there thinking up things to do or talk about on its own, although this would be a very interesting addition!! Also one more thing: presently it’s only a static neural net. Everything it knows how to do was part of its training data, and it’s not yet capable of learning anything beyond the initial training. Both of these things will have to change before we have a self-learning AI that can act on its own.
If it were conscious, it would simply be able to remember and contextualize what is said to it in a single self-integrating model. But it doesn’t really do that. It can keep track of a context within a single page, but that doesn’t get rolled directly back into its model. It doesn’t learn and express at the same time.
CaptainN-,
I’m not sure about the relationship between consciousness and memory. It’s an intriguing question though. What are the minimum memory requirements to have consciousness? haha.
That’s right, currently it’s just a static neural net. I keep thinking about this man whose brain condition leaves him unable to create new memories.
https://www.youtube.com/watch?v=k_P7Y0-wgos
Assuming we can agree he’s “conscious”, it would seem to indicate that consciousness is not really dependent on the ability to create new memories; a fixed/non-learning brain may be enough. Assuming consciousness doesn’t exist on some spiritual plane, there must be some arrangement of particles that produces consciousness… but how exactly? Does this formula, whatever it is, lend itself to creating artificial consciousness? It’s all a mystery!
“More importantly, can ChatGPT or similar systems use vim or emacs?”
Even God panics when he’s SSH’d into the universe’s back-end for the first time in six months because something’s gone down, tried to edit its .bashrc with vi, and realised he’s forgotten how to escape command mode.
M.Onty,
That’s funny, but you’ve got it all wrong. God is OG and uses JCL over a 3270 session.
“Artificial Intelligence” refers to algorithms that mimic intelligence, but are not actually intelligent. In other words, the intelligence is artificial. There is nothing artificial about machine intelligence, or machine learning. It’s actual intelligence. I despise that everyone gets that wrong. Whether the brain is silicon or organic doesn’t make the intelligence real or artificial. That said, these algorithms are not sophisticated kinds of machine intelligence. They are more like 10,000-year-old mosquitoes: basically just a ton of data and a dumb, simple brain model. That’s not so intelligent.
But ChatGPT really IS great at what it is labeled for on the tin. It’s primarily a language processing engine. It’s not meant to replace Google or Wikipedia (or at least, that’s not its main reason for existence). It’s meant to offer a conversational-style interface that can keep track of context to some extent, and it does that EXTREMELY well.
CaptainN-,
I disagree. I think that once we have general AI exhibiting the traits of intelligence, it *is* intelligent. It’s not fair to ask of a black box whether it is intelligent by having to tear the black box open in order to judge it. The black box either is or is not intelligent based on its demonstrated capabilities. Declaring humans to be intelligent while declaring computers to not have real intelligence is purely human bias. Alan Turing saw that we needed a fair & unbiased test for intelligence. Ideally the test is conducted in a double-blind environment such that the judges are purely basing their conclusions on empirical results and not their own internal biases! This implies they’re NOT allowed to look inside the black box to make their judgement!
It is factually accurate to say an intelligence was artificially created, but it’s not fair to say an intelligence passing the tests is any less real than a human’s.
This is how I look at transformer models versus humans (or other future intelligent beings):
Human and other intelligent beings:
F(m, o) -> (m’, a),
Where F is the function that maps current memory (m) and observations (o) to new memory (m’) and actions (a).
Pure transformer model:
L(t) -> m, T(m, i) -> p,
Where L is the function that creates the transformer model (m) from the text input (t), and T is the function that runs the transformer model (m) on input (i) and produces predicted text (p).
So a pure transformer model cannot update the model based on the new input, but humans (and more advanced AI model) can. But a pure transformer model can model a lot (See for example “Large Language Models are Zero-Shot Reasoners” https://arxiv.org/abs/2205.11916 ) and it is only a matter of time before someone makes a hybrid transformer model that can update the model.
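Very roughly, in code terms (a sketch only; the function bodies are stand-ins, and the point is just the shape of the interfaces):

# Pure transformer: L(t) -> m, then T(m, i) -> p. The model m never changes after training.
def L(training_text: str) -> dict:
    return {"weights": hash(training_text)}  # stand-in for an actual trained model

def T(model: dict, text_input: str) -> str:
    return "[prediction from the frozen model for: " + text_input + "]"

# Human-like (or hypothetical future AI): F(m, o) -> (m', a). Memory updates with every observation.
def F(memory: dict, observation: str) -> tuple:
    new_memory = dict(memory, last_seen=observation)  # the "model" itself changes
    action = "[action chosen from updated memory and: " + observation + "]"
    return new_memory, action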
jrincayc,
ChatGPT is slightly more sophisticated than that because, though the model is static as you say, it does allow for evolving state between requests.
So given your function T, it might look more like this:
T(m, i, s) -> (s’, p)
While the model itself doesn’t change, you can have a conversation with memory/context.
I agree, it’s only a matter of time. It’s going to be more expensive to have a dynamic model due to the additional computational power needed over a static NN.
People dismissing ChatGPT for its potential to provide misleading information don’t see where the real breakthrough is. And it is in its ability to *understand what you want*.
In terms of content, the software is just a *demo version*, people. The biased / inaccurate training data is a problem that will be gradually fixed as more *human labor* is put into the process. And there will be plenty of human labor supply on the market in the coming years.
dsmogor,
I agree.
In principle, yes, but I feel this will be a really tough problem to solve in practice.
The human labor needed to go through and fact-check the mountains of data that we’re feeding these AIs would not only be very costly, but may even be antithetical to a company’s agenda in embracing AI to increase profits by replacing human labor. Consider that high engagement can be even more important to one’s bottom line than facts, and engagement is very likely what they’ll be investing in. It’s why journalism is dying and getting replaced by media companies that engage the audience with opinion pieces. Not only is this cheaper, but it can be rewarded with more ad dollars. So I predict companies will end up using AI to double down on what they’re already doing: cutting costs and increasing engagement. They’re not going to spend money to fix the data if they can find ways to be more profitable without doing so.
After this happens, people in the future may blame AI, though I think it may be misplaced and it will be a continuation of policies that have always put dollars over facts.