When I first heard about Be My AI—a new collaboration between OpenAI and Be My Eyes, an app that connects sighted volunteers with blind people who need help via video call—I didn’t let myself get too excited. Be My AI promised to allow blind people to receive an A.I.–generated description of any photo we uploaded. This was a tantalizing prospect, but it wasn’t the first time a tech company had promised to revolutionize the way people with disabilities access visual content. Microsoft had already given us Seeing AI, which in a very rudimentary way provided a rough idea of what was going on in the images we shared, and which allowed us—again, in a fairly basic way—to interact with information contained in written texts. But the details were missing, and in most cases we could know only that there was a person in the picture and what they were doing, nothing more. Be My AI was different.
Suddenly, I was in a world where nothing was off limits. By simply waving my cellphone, I could hear, with great detail, what my friends were wearing, read street signs and shop prices, analyze a room without having entered it, and indulge in detailed descriptions of the food—one of my great passions—that I was about to eat.
I like to make fun of “AI” – those quotes are there for a reason – but that doesn’t mean it can’t be truly useful. This is a great example of this technology providing a tangible, real, and possibly life-altering benefit to someone with a disability, and that’s just amazing.
My only gripe is that, as the author notes, the images have to be uploaded to the service in order to be analysed. Cynical as I tend to be, I suspect this was the intent of OpenAI’s executives: a ton of blind people and other people with vision issues will be uploading a lot of private data to be sucked up into the OpenAI database, for further “AI” training.
But that’s easy for me to say, and I think blind people and other people with vision issues will argue that’s a sacrifice they’re totally comfortable making, considering what they’re getting in return.
This is true in general for all(?) accessibility features: what is good for disabled people is actually good for everyone.
If AI can help blind people communicate, it can also help “functionally blind” people, like those driving, interact with the same content (“see” messages, read web pages, respond to communication, etc).
This was true when we used wheelchair ramps for strollers, subtitles when the kids are asleep, or captions in public places where the TV audio cannot be heard.
So, this is a very welcome (and expected) use of AI. Over time these “foundational” models, which can understand and describe real-world information, will become prominent agents in all our digital lives.
sukru,
I agree, AI can be very useful in lots of applications, both obvious and novel. It’s problematic to endorse some AI applications and criticize others when they largely share the same data and training requirements. Since the same model(s) can be used in many different applications, it’s not tenable to restrict the development of models for “disagreeable” applications without also harming “agreeable” ones.
We covered tools like Nightshade recently…
https://www.osnews.com/story/137614/meet-nightshade-the-new-tool-allowing-artists-to-poison-ai-models-with-corrupted-training-data/
IMHO such attempts to break AI training won’t work for long as workarounds are implemented. However, presuming they did actually work, they could be just as harmful to agreeable AI applications like accessibility. There will never be a way to force AI to work for only certain applications. The good comes with the bad, and the bad can be quite disturbing…
https://www.cnn.com/2023/10/29/opinions/deepfake-pornography-thriving-business-compton-hamlyn/index.html
There have been attempts to force watermarks on AI output, however this will ultimately be in vain for both technical reasons as well as the inability to police it. They will try of course, but… yeah, the technology is truly here to stay. Unfortunately today’s society is not mature or responsible enough to respect the victims of such deepfakes, which makes the problem all the worse. Not only do victims feel embarrassment, they could get fired, their applications discarded, etc. This idea will be controversial as heck, but there is actually a way to make things fair, and that is to make sure there are deepfakes of everyone, haha. Obviously people of today will find this idea reprehensible, but if you think about it, it does solve the finger-pointing stigma. And maybe people would become less judgemental when everyone’s in the exact same boat.
Alfman,
This is one area where everyone should get behind regulation. However our governments are really slow when it comes to matters of personal dignity. It took them a looong time to criminalize so-called “revenge porn”, and I am worried it will take even longer to do the same for deepfakes.
Large groups like Hollywood writers were able to get concessions by doing collective strikes. But I am not sure dispersed individuals can do the same quickly enough.
Sure – but there is even more about AI that is bad for everyone, including the disabled. War has produced advancements in prosthetics. It also creates a huge market for them, and that’s good for profits. But is it good for humans?
Fresh off the press…
https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
Some of these things seem reasonable enough, like protecting from AI fraud, protecting privacy, etc. These goals are easy to agree with, but much like the “do not call” lists and laws to protect us from fraudulent scammers, they’re basically toothless, as well-intentioned as they are. Noble goals don’t imply effectiveness in practice.
Ha, the notion that the military and intelligence communities care about AI safety and ethics is humorous. As long as AI helps to complete a mission, they’ll pull out all the stops regardless, because this is what they’ve always done and no high-ranking officials have ever lost their jobs over it. Even the constitution gets overlooked when it gets in the way.
This point deserves our attention, because AI could well determine the future of our justice system, even if indirectly.
A frequent gripe of mine is that technology often becomes monopolized by the giants. In the case of AI they have nearly exclusive access to training data the rest of us aren’t privy to. I doubt they’d ever do anything so drastic, but how interesting would it be if the tech giants had to completely open source their training models so that small devs could be on equal footing with them.
Alfman,
I have not read the entire thing yet, only reports and summaries. However my general skepticism about AI regulation seems to fit here.
As you mentioned, this is nonsensical. Especially when rivals like China have been shown to use AI to oppress minorities for a long time:
https://www.bbc.com/news/technology-57101248
It is not unreasonable to expect some versions of this software to become available to other states, or other bad actors, over time (if not already).
There is a very valid reason to use AI in justice systems: AI is more impartial than most humans, who are subject to primitive urges like hunger. It has been shown that hungry judges behave very differently from ones who have had their lunch:
https://en.wikipedia.org/wiki/Hungry_judge_effect
However, my concern is that while the public will amplify the mistakes of AI (which should be rectified, of course, not excused), it will gladly ignore significantly larger systemic issues when they are caused by humans (compare, for example, self-driving versus manual driving).
(definitely need that edit button back, I’ll raise this as many times as necessary)
AI will be exactly as biased as it’s trained to be. The idea that it can replace humans – ESPECIALLY when it comes to moral judgements – is absolutely insane to me. It can do a lot – it’ll never be a replacement for humans. We just need to mature in our understanding of human morality. AI can’t do that for us. We have to do it for them.
CaptainN-,
I would just say “have you seen humans?”
The main difference between ML-based judgements and human ones is explainability and hence accountability.
If you ask a judge why they acted unfairly, they will never say “I was hungry and agitated”, and even less likely “I did not like the skin color of the defendant”; whereas the ML model will tell you exactly why it made a certain decision. From there we can systematically fix either those decisions, or the root causes of the problems.
That does not mean, of course, that ML is perfect, far from it. But humans are generally worse at unbiased judgements.
No one can explain why AI makes the decisions it makes. It’s not deterministic.
CaptainN,
That is incorrect.
Machine learning models, including deep neural networks with billions of parameters, are deterministic. They may have a “random seed” parameter, but that too can be replayed. (This is how we “unit test” those models.)
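To illustrate what I mean by “replayed”, here is a minimal toy sketch (my own illustration, not tied to any particular framework): with a fixed seed, two training runs produce bit-identical weights, which is what makes this kind of testing possible.

```python
# Toy sketch: a seeded training run is fully reproducible.
import numpy as np

def train(seed, steps=100):
    rng = np.random.default_rng(seed)            # every random choice flows from this seed
    X = rng.normal(size=(64, 4))                 # synthetic features
    y = X @ np.array([2.0, -1.0, 0.5, 0.0])      # synthetic regression target
    w = rng.normal(size=4)                       # random init, but reproducible
    for _ in range(steps):
        w -= 0.01 * X.T @ (X @ w - y) / len(y)   # plain gradient descent
    return w

# Two runs with the same seed yield bit-identical weights: the run can be "replayed".
assert np.array_equal(train(7), train(7))
```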
If you have not caught up in recent years, there is an entire research area on “Explainable AI”, with modern tools that can be adapted to a variety of model architectures:
This could be a starting point:
https://www.sciencedirect.com/science/article/pii/S1566253523001148
(Also, as a side benefit, recent “foundational” language models like GPT can themselves be asked directly, “why did you make that recommendation?”)
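For what it’s worth, a minimal sketch of that kind of follow-up prompt, assuming the OpenAI Python SDK and an API key in the environment; the model name and messages are placeholders, and the answer is the model’s own stated rationale rather than a guaranteed account of its internals:

```python
# Hedged sketch: asking a chat model to explain an earlier recommendation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = [
    {"role": "user", "content": "Recommend a database for a small analytics app."},
    {"role": "assistant", "content": "I would recommend PostgreSQL."},
    {"role": "user", "content": "Why did you make that recommendation?"},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(reply.choices[0].message.content)  # the model's stated reasoning
```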
CaptainN-,
It technically is 100% deterministic. But I think what you meant is that it can be very challenging to decode, in a comprehensive way, why an AI model acts the way it does. While I understand what he was getting at, the way sukru stated it seems misleading to me, this quote in particular: “The main difference between ML-based judgements and human ones is explainability and hence accountability.”
Given the inherent complexity of deep neural nets, the best tools we humans have for analyzing bias in AI models are statistical in nature. Nobody really knows whether an NN will show skin color bias without testing it with data it hasn’t seen. What about glasses, clothing, or hair styles? NNs could easily pick up on these small details and find that they have a correlation to trial results.
The best way to ensure the NN doesn’t pick up these biases is to omit the data that could cause bias from the NN altogether… no pictures. But things could still leak in and cause subtle biases, like language use, names, defendant descriptions, locations, social status clues. Some of these details may be intertwined with important facts of the case, like clothing worn and vehicle models. Feeding AI facts about the case is very important for deductive reasoning, but it could easily reflect unintentional biases from previous cases.
Just as an example, say the AI gets trained on cases where racist judges/witnesses/juries/etc decided cases based on skin color. Even if the AI does not have access to skin color, it can still match on other patterns that fill in for the missing data and reproduce the data bias. Racial discrimination is a big one, but it’s technically possible for AI to pick up on many discriminatory behaviors that have nothing to do with justice.
While the risks of bias are very real, NNs do an excellent job at pattern reproduction, which makes them very consistent with regard to the training data (biases and all). With this in mind, I think AI models could prove useful in exposing case inconsistencies. The AI would do very well at testing hypothetical scenarios. For example, if we tweak some simple variables at the input, like the judge or the makeup of the jury pool, how does that statistically affect the case result? This would be a very interesting area of study.
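A rough sketch of what such hypothetical-scenario testing could look like (entirely my own illustration; the model and feature names are made up):

```python
# Illustrative sketch only: probe a trained case-outcome model by changing a single
# input variable (here the judge) and seeing how much the prediction shifts.
# "predict_outcome" and the feature names are hypothetical stand-ins.
def sensitivity_to_judge(predict_outcome, base_case, judge_ids):
    """Predicted conviction probability per judge, with everything else held fixed."""
    results = {}
    for judge in judge_ids:
        case = dict(base_case, judge_id=judge)   # counterfactual: only the judge changes
        results[judge] = predict_outcome(case)
    return results

# Stand-in model that (unrealistically) keys only on the judge, to show the idea:
fake_model = lambda case: {1: 0.42, 2: 0.61, 3: 0.44}[case["judge_id"]]
print(sensitivity_to_judge(fake_model, {"charge": "theft", "judge_id": 1}, [1, 2, 3]))
```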
sukru,
Lots of stuff to consider in there. It treats deep neural networks as non-explainable yet more accurate, which kind of makes sense to me. But given the importance of understanding a model, the desire to transform it into a more understandable white-box model makes sense too. Although the reason DNNs exist in the first place is that they can fit the data better than shallow, understandable models. There’s a tradeoff. This is a very interesting topic, I’ll need to take a lot more time reading it. Thank you for linking it!
Alfman,
For deep networks, we have tools to test models for bias using what is called “adversarial learning”. This is basically done by pairing them with a model that knows about that bias, and penalizing the main model when the bias is detected:
I could recommend a paper by one of my colleagues:
https://dl.acm.org/doi/10.1145/3278721.3278779
And modern recommendation systems can also be built around explainability. Yes, this means some pre-work has to be done, but it is still possible to get much better reasons (especially compared to humans).
sukru,
I disagree, this doesn’t actually solve the problem. Adjusting the results can actually introduce new biases in the same way that affirmative action can.
Alfman,
I would recommend reading that paper (or another resource like: https://developers.google.com/machine-learning/glossary/fairness#equalized_odds)
There are different fairness metrics, and some of them, as you mentioned, have side effects that can be undesirable.
The paper focuses on two (demographic parity and equality of odds), but their methods can be applied to others.
Let’s say we have two demographics:
BLUE people, where 20% of them are qualified
GREEN people, where 40% of them are qualified
If both groups apply to the same job with 10 openings, and the same number of applications come from each group, then having 5 GREEN and 5 BLUE applicants approved gives you “demographic parity”.
Whereas other metrics are satisfied when 20% of BLUE and 40% of GREEN applicants are accepted, preserving their qualification ratios.
So, yes, these tools can be used in different ways. But any policy decision is an external factor here.
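To make the two selection rules above concrete, here is a toy calculation (my own, not from the paper), assuming 50 applicants per group:

```python
# Toy illustration of the two notions above: BLUE 20% qualified, GREEN 40% qualified.
applicants = {"BLUE": 50, "GREEN": 50}
qualified_rate = {"BLUE": 0.20, "GREEN": 0.40}
openings = 10

# Demographic parity: equal selection *rate* per group -> 5 and 5 with equal applicant pools.
parity = {g: openings * applicants[g] // sum(applicants.values()) for g in applicants}

# Qualification-proportional rule: scale the 10 openings so acceptances preserve
# the 20%/40% ratio between the groups (the "other metrics" flavour).
qualified = {g: applicants[g] * qualified_rate[g] for g in applicants}
total_q = sum(qualified.values())
proportional = {g: round(openings * qualified[g] / total_q) for g in applicants}

print(parity)        # {'BLUE': 5, 'GREEN': 5}
print(proportional)  # {'BLUE': 3, 'GREEN': 7}
```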
sukru,
Yes, I understand the ratios and concepts you are referring to, but the approach to correcting DNN bias is fundamentally flawed.
If we take the DNN (where we don’t have a clear connection between input and output) and then try to correct the biases on the output side, we’re implicitly making “corrections” even in cases where the DNN already got it right. The new NN plus correction factors might produce averages that look like they add up, but they can in fact be unfair in specific individual cases. The real source of data bias occurred before the DNN transformation and not after it. That is where the corrections need to be made, but this is the hard part because we often don’t know where the biases are. The fact that a judge may be racist doesn’t mean that all those he prosecuted were innocent. The purpose of the DNN is to assess innocence or guilt, and in assessing the true answer it is absolutely imperative that the DNN is trained without bias. It is NOT acceptable just to take a biased DNN and then apply some blanket bias compensation after the fact. That would be easy to do, but it doesn’t give us fair answers for specific cases.
Alfman
This is actually very different. It tries to correct the mistakes so that when the model mispredicts, it does not do so with a bias.
There is an example with the “UCI Adult Dataset”, where they try to predict people’s income (whether it is >$50k or not). So the model, a standard logistic regression, looks like:
predictedIncome = f(features…)
Where features are variables like age, education, occupation, and so on. Ideally we would want to minimize |actualIncome – predictedIncome| in our model.
Their contribution is pairing this with an adversary that tries to predict a sensitive attribute (here, gender) by looking at the mispredictions.
pGender = f2(actualIncome, predictedIncome)
If the model is perfect in its predictions, it is easy to see that the adversary will be unable to learn anything, and won’t be better than a coin toss.
However, if the model has a consistent bias in its errors, then the adversary will be able to infer the gender by looking at the mispredicted examples.
The new gradient calculation forces the model to learn “different” mistakes instead of ones that are based on that particular category.
Finally, they found a small decrease in accuracy (84.5% vs 86%), which can be mitigated by increasing model capacity or better feature selection.
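For anyone curious, a much-simplified sketch of the general setup described above (my own toy code on synthetic data, using a plain penalty term rather than the paper’s exact update rule):

```python
# Simplified sketch of adversarial debiasing: a logistic "predictor" learns income,
# a logistic "adversary" tries to recover gender from (actual, predicted) income,
# and the predictor is nudged in the direction that hurts the adversary.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)                       # sensitive attribute (0/1)
skill = rng.normal(size=n)
income = (skill + 0.5 * gender + rng.normal(scale=0.5, size=n) > 0).astype(float)
X = np.c_[skill, rng.normal(size=n)]                 # features deliberately exclude gender

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w = np.zeros(X.shape[1])                             # predictor weights
a = np.zeros(2)                                      # adversary weights over (actual, predicted)
lam, lr = 1.0, 0.1

for _ in range(500):
    pred = sigmoid(X @ w)
    grad_pred = X.T @ (pred - income) / n            # ordinary logistic-loss gradient

    Z = np.c_[income, pred]                          # adversary sees actual + predicted income
    g_hat = sigmoid(Z @ a)
    a -= lr * Z.T @ (g_hat - gender) / n             # adversary learns to guess gender

    # Chain rule: how the adversary's loss would change if the predictor's output shifted.
    grad_adv = X.T @ ((g_hat - gender) * a[1] * pred * (1 - pred)) / n
    w -= lr * (grad_pred - lam * grad_adv)           # improve accuracy, but hurt the adversary
```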
Anyway, these tools can be used for positive purposes, and like all other tools, can also be used mischievously.
I need to make a correction
The adversary model can be better than a coin toss, because it can learn the base rates.
sukru,
I’m not sure that you’re following my point though. It’s like trying to solve a system of equations with too many unknowns. No matter what kind of black box solver you use, you’re still left with unknowns. You can try guessing at possible solutions, but without more data or assumptions to fill those unknowns there is no mathematical way to derive the right value with certainty.
Without more data, we’re still left with unknowns and there’s no way to deduce an answer that doesn’t contain unknowns. The same is mathematically true with solving for bias in neural nets.
Even with “adversarial training”, you are still implicitly optimizing a goal on the output side rather than correcting the input. This works very well for something like chess because you can determine which black box is best by playing black boxes against each other. However, in the case of determining facts or guilt, this strategy doesn’t work. The factors that determine innocence or guilt cannot be mathematically derived by placing one black box against another. The only way to judge whether the black box is performing well is to check its output against known good results. In your examples, you conveniently have both the actual income and the predicted incomes, but in real cases the known good answers are missing.
Obviously if we had known good results, we could just train the DNN using them in the first place. However, since our data has bias, we can neither rely on it to train an unbiased DNN nor use it to correct for bias using adversarial networks. We can try to account for sloppy input bias as best we can by making assumptions about what the data should look like, but this is inherently biased too. It’s mathematically unsolvable without more data in the exact same way that equations containing unknowns are. I’d like for us to agree that math cannot solve the missing variables problem, it can only give us sets of possibilities. We either need more data to dial in the unknown variables, or we need to pick results that satisfy our expectations even though our expectations can be wrong for specific cases.
DNNs are exceptionally good at outputting plausible possibilities. ChatGPT is a pro at generating plausible output, but this doesn’t mean the generated output is factually right. So I think we need to concede that the underlying math can have ambiguous results that black boxes cannot solve without introducing better/more information.
Yes, I agree that neural networks and AI are immensely useful and have the potential to drastically improve consistency in courts, which I think has been your main point all along. However in terms of the “facts” and “garbage in garbage out” problems, I consider these data problems more so than algorithm problems.
Alfman,
I am not sharing random articles or links. They do actually have good insight on this topic.
The model discussed here does not work that way. It is called logistic regression, and generally a version of gradient descent is used to train it:
https://builtin.com/data-science/gradient-descent
https://medium.com/@hunter-j-phillips/a-simple-introduction-to-gradient-descent-1f32a08b0deb
For any differentiable loss surface, gradient descent is known to be an effective method for finding a local minimum, and it will converge after a sufficient number of iterations.
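A tiny worked example of what convergence means here (my own illustration, on a one-dimensional quadratic rather than a real model):

```python
# Gradient descent on f(x) = (x - 3)^2, whose derivative is 2*(x - 3).
# Each step moves downhill along the gradient; the iterates converge to the minimum at x = 3.
def gradient_descent(df, x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

print(gradient_descent(lambda x: 2 * (x - 3), x0=-10.0))  # ~3.0
```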
I am not sure how this is relevant.
The adversarial model tries to prevent leakage of bias when the underlying model makes mistakes. Those mistakes are compared to training data which has known labels (not guessed, actual labels).
The paper discusses situations where we have gold-standard data (census records, for example, and incomes, which are known from banks).
That is true, but GPT is an entirely different kind of network. Here we are looking at regression, which is more apt for decisions, than at a transformer, which is built for language understanding.
Though even in language understanding tasks, the exact same thing can be said for humans. In fact, in many human language tasks (or programming competitions) some of these models have already outperformed average humans.
Anyway, machine learning is a good subject, and it will shape the future of humanity. There is no need to think of it as a “black box” anymore; however, we should still be vigilant in keeping our understanding up to date with it.
sukru,
I understand the mechanics of NN training involved, however the point is that the algorithms, as interesting or novel as they may be, cannot use deduction to reach conclusions about facts existing outside of their black boxes. I don’t have a problem using NN models to make extrapolations over missing portions of the data, but extrapolations and actual fact are two different things. We would be wrong to assume that these extrapolations can’t be wrong. They can be wrong and sometimes are. Without adequate external information, it’s mathematically impossible to unambiguously know when the model is wrong. Making extrapolations about the bias in a courtroom, no matter how sophisticated, does not prove actual bias in a courtroom. It seems tempting to think that there exists some special NN algorithm that can deduce some truths about the world outside of its training data, but that’s not how it works. In computer science terms this is covered by Gödel’s incompleteness theorems.
https://plato.stanford.edu/entries/goedel-incompleteness/
For starters, we may not have “actual labels” to use. But assuming we did, what you are trying to do is extrapolate known biases from set A to unknown biases in other sets using neural nets to model these biases. Nothing stops us from doing this and you will get the algorithm to output results, but we still need to be mindful that even if a model is 100% accurate on set A, it doesn’t necessarily translate to 100% accuracy across sets B, C, D, etc. We assume the extrapolation is correct, but that is not a fact. Some proportion of cases in sets B/C/D may diverge significantly from the conditions of set A and each other, such that your bias-correcting NN actually ends up introducing new bias that wasn’t there. We shouldn’t gloss over such details if we’re going to use these DNNs to make decisions impacting people’s lives.
Well, you won’t hear me argue this point, haha. Human judges are notoriously inconsistent. Very often judges directly contradict each other. Getting a favorable judge and jury is 9/10ths of the law. Of course I made this up, but I think you’ll follow my meaning.
I have yet to get far into your “Explainable AI” link; it’s on my list. I feel the no-black-box statement is controversial when talking about DNNs. I can see how some of them could be converted to simpler/shallower models that are much easier to follow, but those aren’t necessarily as accurate and might even use a different logical path to get there.
sukru,
While we may disagree on some points, please know that I hold you in very high regards here on osnews. I want to say that I do enjoy these discussions with you, thank you for that.
Alfman,
Thanks. As you said, even though we disagree, we also get to learn a lot from each other here.
AI can be used for one of the following, but not both:
1. Solve world hunger
2. Increase shareholder value
We get to decide which we use it for, and so far, the reactive posture of those who might NOT want to “increase shareholder value” at the expense of solving real problems, like world hunger, is helping the wrong side, through their fear-based reactivity. Do better. Stop talking about AI like it’s the end of everything. It’s one of the greatest tools for problem solving humanity has ever created. The ones who want to use it to obsolete your job and kick you to the street are not afraid – they are winning, and you are helping.
I am not sure why “world hunger” comes up so often. But if you look at its root causes, it is definitely not money, but rather a lack of freedom.
(Overlay world hunger index over world freedom index, and the picture should be clear).
The very basic question is: if they are hungry at their current location, why can’t they move to a place where food is plentiful? (Even with “frictions”, societies like the US are made up of huge immigrant populations. It is fair to criticize the intake ratio, but at least there is a possibility. Whereas most poorer dictatorships will prevent you from leaving.)
Anyway, that is a very long discussion, once again. Please overlay those two maps.
Wrt “shareholder value”: what do you think large corporations are trying to do after they have saturated the Western markets?
Hint: They want to open up “new” markets (hence spread prosperity as a by-product).
One such example (“Next Billion Users”):
https://developers.googleblog.com/2022/05/building-better-products-for-new.html
“World hunger” comes up so often because it’s a simple, easy-to-understand measure of equity. Equity is a hoity-toity concept, which requires some thought and a lot of term definitions. The fact that we produce enough food to feed every human on the planet, every day, but still have vast swaths of people living in poverty and going hungry, is easy to understand.
You have expressed a clear bias (and something of an extremist, unbalanced position), and if you are the one training the AI, you are going to end up with a very biased AI. Specifically, you boiled complex realities in politics down to a reductive “freedom” point of view. Others might argue that order is what we need to solve these problems, and that too much freedom is inefficient. Also very reductive, and if someone were to train their models on that idea, the model would be biased. Or someone might say that “private ownership, and the freedom to do what you want with what you own” is the way to solve world hunger – which is, BTW, VERY close to what you said, and it’s completely wrong – not just incorrect, but also immoral. You have stated an immoral, indefensible (on fact, logic, and morality) extremist position on politics. But it’s not really your fault, it’s what neolibs have been promoting, almost unchallenged, for like a century in the west.
It’s kind of silly really – imagining that organizations with the sole stated purpose of “making profit” (that is, stuffing money into stakeholders’ pockets) are somehow going to solve great societal issues, including in realms that have no profit potential. Honestly, it’s insane. I’m tired of that type of obviously incorrect nonsense. Aren’t you tired of that? Anyway…
AI is not magic, it’s a tool. It can produce leverage, just like other tools – industrialization, IT, etc. – but leverage to accelerate what, that is the important question. It can maximize shareholder value, it can protect property rights (and the “liberty” to do what you want with that thing you claim rights over, without fear of government, yadayada, insanity insanity), or it can solve world hunger. We get to decide how we use the tool.