More to say later but because it seems apt, GPT-4 just wrote this piece of micro fiction for me based on a piece of my own: — Baden
The onboard AI "improves" your photos as standard. If that is taken to the extreme and it starts to "beautify" people without their knowledge, we might see a breakdown of the sense of external identity: a new type of disorder in which people won't recognize their own reflection in the mirror because they look different everywhere else, and people who haven't seen them in a while, other than online, will feel a dissonance when they meet up, because their faces won't match their online presence. — Christoffer
How's that any different from make-up? — Isaac
What happens when the majority of photos being taken use AI to manipulate people's faces? What happens when such manipulation starts to reshape or beautify aspects of someone's face that essentially reconstructs their actual look, even if it's barely noticeable? — Christoffer
How would this affect facial recognition as a means of security? If someone steals your mobile phone, could they then use AI to access it by fooling the facial recognition security software?

"What happens when the majority of photos being taken use AI to manipulate people's faces?" — Christoffer
Another way of looking at this is that language (or the core pattern creation and manipulation power therein) has "escaped" into technology which offers it more avenues for expression and proliferation. — Baden
Everyone should watch that video. — Baden
Insightful and important video, I think a must watch. — Wayfarer
So, the question "how is it different from make-up?" bears on your question about how it will impact society. — Isaac
What did you think about the opening point that 50% of all current AI experts think there is a 10% chance of AI making humans extinct? — universeness
could they then use AI to access it, by fooling the facial recognition security software? — universeness
Are there any counter-measures currently being developed as this AI Gollum class gets released all over the world? — universeness
What did people think of the prediction that 2024 will be the last election? — universeness
But I just see AI produce more extreme versions of the problems we already have. The major one being distrust of truths, facts, and experts. — Christoffer
So how about an AI attached to a 3D printer, producing a 3D mask the perp could paint and wear? :scream:

"Facial recognition requires a 3D depth scan and other factors in order to work, so I'm not sure it would change that fact." — Christoffer
And today we don't really have any laws for AI in similar situations. Imagine getting bots to play "friends" with people on Twitter and in Facebook groups. An AI analyzes a debate and automatically creates arguments to sway the consensus. Or to be nice to people, and slowly turn their opinions towards a certain candidate. — Christoffer
What good is democracy if you can just reprogram people toward voting the way you want? It's essentially the death of democracy as we see it today if this happens. — Christoffer
I watched about 20 minutes of the video, so I cannot speak for the whole of it. But up to that point I could not see anything that refers to an inherent danger of AI itself. — Alkis Piskas
Make-up, however, is part of the whole experience: they see the same "made-up" face in the mirror every day, and there's the physical act of applying it and being part of the transformation. But a filter that isn't even known to the user, i.e. something working underneath the experience of taking photos that is officially just "part of the AI processing of the images to make them look better", can have a psychological effect on the user, since it's not by choice. — Christoffer
Would people be so easily fooled, however, if they knew this was happening? Surely we would come up with a counter-measure once we know it's happening. — universeness
If AI can learn to understand what our brain is 'thinking' then wow.......... wtf? — universeness
Really? This is a hidden feature not openly declared? — Isaac
If the NPU detects a face, for example, the ISP ensures all the components in an image are perfectly captured by calling up settings tailored for portrait photography.
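The mechanism being described, a detector picking a tuning profile that the image processor then applies silently, can be sketched as a toy example. This is purely illustrative Python, not any vendor's actual camera API; every name (`detect_scene`, `ISP_PROFILES`, the `contains_face` flag) is hypothetical.

```python
# Hypothetical sketch of an NPU-driven camera pipeline: a detector
# classifies the frame, and the image signal processor (ISP) "calls up"
# a matching tuning profile. All names here are illustrative.

# Tuning profiles the ISP can apply per detected scene.
ISP_PROFILES = {
    "portrait":  {"skin_smoothing": 0.6, "bokeh": True,  "sharpen": 0.2},
    "landscape": {"skin_smoothing": 0.0, "bokeh": False, "sharpen": 0.8},
    "default":   {"skin_smoothing": 0.0, "bokeh": False, "sharpen": 0.5},
}

def detect_scene(frame):
    """Stand-in for the NPU's classifier; returns a scene label."""
    return "portrait" if frame.get("contains_face") else "landscape"

def process(frame):
    """Select and apply an ISP profile based on the NPU's detection."""
    scene = detect_scene(frame)
    profile = ISP_PROFILES.get(scene, ISP_PROFILES["default"])
    # The point raised in the thread: the user never chose these settings,
    # yet faces are smoothed by default whenever one is detected.
    return {**frame, "applied_profile": scene, **profile}

result = process({"contains_face": True})
print(result["applied_profile"])  # portrait
print(result["skin_smoothing"])   # 0.6
```

The relevant design point is that the profile selection happens before the user sees the image, which is why Christoffer argues it differs psychologically from make-up: there is no deliberate act of applying it.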
So, I would say it's safer to be in the EU when AI systems hit since the EU more often than not is early in installing safety laws compared to the rest of the world. But since all of this is different around the globe, some nations will have AI systems that just have no restrictions, and that can spread if not watched carefully, much easier than human-controlled algorithms right now. — Christoffer
Imagine an actual lie detector that is accurate. It would change the entire practice of law. If you could basically just tap into the memory of a suspect and crosscheck that with witnesses, then you could, in theory, skip an entire trial and just check if the suspect actually did it or not. — Christoffer
It seemed to me that what Tristan and Aza were warning about has little or no current legislation that would protect us from its deployment by nefarious characters only interested in profiteering. — universeness
In my own teaching of Computing Science, we even taught secondary school pupils the importance of initial test methodologies, such as the DMZ (De-Militarised Zone) method of testing software to see what effects it would have before it was even involved in any kind of live trial. — universeness
But surely if AI becomes capable of such ability, it would not be introduced before protection is established against possible results such as the 'thought police' (Orwell's 1984) or the pre-crime dystopian idea dramatised in the film 'Minority Report'. — universeness
In one sense, it's great if AI can help catch criminals, and Tristan and Aza did talk about some of the advantages that this Gollum class of current AI will bring. — universeness
I'm not sure I share your level of concern though (I'm more inclined to think people will just come to terms with it), but I see how one might be more concerned. — Isaac
This ship has sailed and government will be too slow to act. Delete social media, ignore marketing and read a book to manage your own sanity. — Benkei
It's with topics like this that the EU is actually one of the best political actors globally. Almost all societal problems that arise out of new technology have been quickly monitored and legislated by the EU to prevent harm. Even so, you are correct that it's still too slow in regards to AI. — Christoffer
Well, I took early retirement from teaching Computing Science 4 years ago.

"Nice to speak with someone who's teaching on this topic." — Christoffer
I agree that is broadly what is required, but Tristan and Aza seem to be suggesting that such precaution is just not happening, and with all due respect to @Benkei et al, some folks have already given up the fight! — universeness

"The positive traits of new technology are easily drawn out on a whiteboard, but figuring out the potential risks and dangers can be abstract, biased, and utterly wrong if not done with careful consideration of a wide range of scientific and political areas. There has to be a cocktail effect incorporating psychology, sociology, political philosophy, moral philosophy, economy, military technology, and technological evaluation of a system's possible functions." — Christoffer
These things are hard to extrapolate. It almost requires a fiction writer to make up potential scenarios, but based more on the actual facts within the areas listed. — Christoffer
This is what I meant by the debate often polarizing the different sides into stereotypical extremes of either super-positive or super-negative, for and against AI, but never accepting AI as a reality and still working on mitigating the negatives. — Christoffer
I agree, especially if most of our politicians are not fully aware of the clear and present dangers described in the video. Perhaps it's time for us all to write to or email the national politician who represents the region we each live in, and ask them to watch the video! We all have to try to be part of the solutions.

"That's the place where society needs to be right now, publicly, academically, politically, and morally. The public needs to understand AI much faster than they are right now, for their own sake in terms of work and safety, as well as to protect their own nation against a breakdown of democracy and societal functions." — Christoffer
The UN would probably ban such tech and these nations will be pariah states, but I don't think we could change the fact that it could happen and probably will happen somewhere. — Christoffer
An AI scan of Trump's thoughts may become a moment of important scientific discovery, as I think the result would be the first AI that digitally throws up!

"That's something that might improve. Like, just put the scan on Trump and you have years of material to charge him for." — Christoffer
So do you agree that this is an outcome that we must all refuse to accept?

"Since this is a money maker, laws will aim at maximising profit first, to the detriment of protection for people." — Benkei
The problem with AI (and also with genetically modified organisms and nanotech) is its potential to not be local at all when we mess up. With all our previous technologies we have been able to use them (Hiroshima, Nagasaki and the tests) or make mistakes (anything from Denver Flats to Fukushima to Chernobyl) and have these be local, if enormous, effects. If we make a serious boo boo with the newer technologies, we stand a chance of the effects going everywhere on the planet and potentially affecting every single human (and members of other species, and perhaps, through them, us). And we have always made boo boos with technologies.

So, yes, control its use, make sure people are educated. But then, we've done that with other technologies and made boo boos. Right now much of the government oversight of industry (in the US, for example) is owned by industry. There is a revolving door between industry and oversight. There is financing of the oversight by industry (with the FDA, for example). The industries have incredible control over media and government, by paying for the former, and through lobbying and campaign finance for the latter.

"So, in my opinion, the dilemma is not about AI. It's about our will and ability 1) to educate people appropriately and 2) to control its use, i.e. use it in a really productive, reliable, and responsible way." — Alkis Piskas
I thought that you would mention that. But the atomic bombing at Nagasaki was like an experiment. A bad one, of course. But we saw its horrible effects and haven't tried again. Yet during the whole Cold War period I remember we were saying that it only takes a crazy, insane person to "press the button". It would need much more than that, of course, but still the danger was visible. And it still is today, especially as more countries with atomic weapons have entered the scene since then.

"The problem with AI (and also with genetically modified organisms and nanotech) is its potential to not be local at all when we mess up. With all our previous technologies we have been able to use them (Hiroshima, Nagasaki and the tests) or make mistakes (anything from Denver Flats to Fukushima to Chernobyl) and have these be local, if enormous, effects." — Bylaw
There are a lot of different kinds of "boo boos" that we can make that are existential threats, which are much more visible and realistic than AI's potential dangers.

"If we make a serious boo boo with the newer technologies, we stand a chance of the effects going everywhere on the planet and potentially affecting every single human." — Bylaw
Right.

"Right now much of the government oversight of industry (in the US for example) is owned by industry. ... The industries have incredible control over media and government, by paying for the former, and through lobbying and campaign finance for the latter." — Bylaw
Yes, so far we haven't gone to a full exchange of nukes or any tactical use of nukes. But these wouldn't be mistakes; they would be conscious choices. My point in bringing in nuke use was that even nukes, as long as they are single instances, or single leaks or catastrophes, are still local, not global. Chernobyl would have been partly global if the worst-case scenario hadn't been offset by some extremely brave technicians and scientists who were ingenious and paid with their lives. But in general, still.

"I thought that you would mention that. But the atomic bombing at Nagasaki was like an experiment. A bad one, of course. But we saw its horrible effects and haven't tried again. Yet during the whole Cold War period I remember we were saying that it only takes a crazy, insane person to 'press the button'. It would need much more than that, of course, but still the danger was visible. And it still is today, especially as more countries with atomic weapons have entered the scene since then." — Alkis Piskas
Stephen Hawking and Elon Musk et al did not rely on sci-fi.

"There are a lot of different kinds of 'boo boos' that we can make that are existential threats, which are much more visible and realistic than AI's potential dangers." — Alkis Piskas
Indeed, a lot of people are talking or asking about potential dangers in AI technology. Yet, I have never heard a realistic, practical example of such a danger. — Alkis Piskas
But it is precisely humans that need to control that which they do not control. My point is that we have not controlled technology previously and have had serious accidents consistently. Fortunately the devastation has been local. With our new technologies, mistakes can be global. There may be no learning curve possible.

"Dangers created by humans can always be controlled and prevented. It's all a question of will, responsibility and choice." — Alkis Piskas
Yes, I know what you said and meant. But we cannot know how "non-local" these incidents can be, i.e. how "global" they can go.

"My point bringing in nuke use was that even nukes, as long as they are single instances, or single leaks or catastrophes, are still local, not global." — Bylaw
Of course not. He was a scientist and he is a technology expert, respectively. The opposite happens: sci-fi people take their ideas from science and technology and inflate, misrepresent and make them sound menacing or fascinating, for profit.

"Stephen Hawking and Elon Musk et al did not rely on sci-fi." — Bylaw
Yes, I know.

"My point is that we have not controlled technology previously and have had serious accidents consistently." — Bylaw