Artificial Intelligence and the Post-Truth Era

Nebfanatic

Artificial intelligence continues to become more advanced, and with that come many questions. One in particular I'd like to focus on is AI-generated video and its impact on people's perception of the truth going forward. This technology is rapidly advancing, and soon its output will be completely indistinguishable from video captured of real life. What does this mean for the future of video information online? What can we do to curb the effects this may have on misinformation going forward?

 
32 minutes ago, Nebfanatic said:

What can we do to curb the effects this may have on misinformation going forward?

What can we do?  The answer is nothing.

I know that some emails made with the help of AI will actually be labeled "Created with the help of AI."

So that kind of labeling might be something that gets used, but that is about it.

 
I am not a fan of this at all and I don’t understand why we want this developed. 
I use it all the time for work.

I can take my notes and have them turned into a Google Slides presentation in seconds. I can take a summary and have it turned into a quiz, test, or worksheet in seconds.

It is the biggest time saver that I have ever seen in education.  

I can literally ask it to give me a detailed summary of a chapter of a book that I can then use to create notes.  

 
It is the biggest time saver that I have ever seen in education.
That sort of stuff I understand... sort of. I don't like the fake video type of stuff.

 
What can we do?  The answer is nothing.
We could destroy the massive, environmentally harmful data centers these things run on.

I am not a fan of this at all and I don’t understand why we want this developed. 
It's going to be great for things like video games, but ultimately this technology has too much potential for abuse, imo. Soon enough you'll be able to make anyone seem like they are doing or saying anything, and it will look completely real. It may even get tough to authenticate video evidence in court.

 
Soon enough you'll be able to make anyone seem like they are doing or saying anything, and it will look completely real. It may even get tough to authenticate video evidence in court.
This is the crux of the problem, and it goes beyond the ability to tell fact from fiction. Yes, it's horrible that video evidence in court will likely become impossible to verify, but the bigger problem, imo, is that it will give people endless opportunity to believe whatever the hell they want and give bad actors endless opportunity to prey on that.
 

We’ve already seen people existing in alternate realities based on tampered video and pictures. As an example, Trump wholeheartedly believed a literal “MS13” was tattooed on that guy’s fingers when it was just a cheaply Photoshopped version of what the symbols stood for. Now apply AI, make that impossible to debunk, and you’ll have endless people being manipulated and happily having any reality they want verified to their satisfaction. It’s already nearly impossible to reach these people with the truth, and it will only get worse. There are bad actors lined up for miles waiting to manipulate people further. They won’t have a prayer of sussing it out for themselves.

It’s much worse than an “unfortunate byproduct”; it’s basically the end of any fact-based reality.

 
It’s much worse than an “unfortunate byproduct”; it’s basically the end of any fact-based reality.
I hate to take this “there”, but can a democracy exist in that environment?  I don’t see how.  
 

 
I was not alive when the calculator became something we used all the time, but I am guessing there were a lot of "Now no one will learn math" gripes, and all that has happened is we have built more amazing planes, cars, roads, buildings, and so on, all while using math.

I remember when the internet came out and we were told things like "Well, you think you will always just be able to look things up on your computer, you NEED to know how to use the Dewey Decimal System!"

I was not around when the first remote controls came out but I am guessing there was a lot of "What...are you that lazy that you can't get up and change the station???"

I think it is fair to argue that "things were better back in ______," and some things were better. I can also remember my mom having to hurry to the bank on Friday to get money out for the weekend, because it was either have cash or write a check. There were no ATMs.

There will be negatives with AI and there will be positives. The intent of AI is not to hurt, just like the intent of the car is not to hurt. Some things will get hurt, but for the most part the outcome will be a net positive.

 