Start thinking big about AI: can it help build a new civilization?
AI will be far more disruptive than making fintech and banking better.
👉TAKEAWAYS: This is really a list of things to ponder to help you think big about AI….
🔹 GenAI models consistently match or outperform median human capabilities across an expanding array of tasks and will be increasingly coupled with various other systems.
🔹 All types of corporate intelligence in all industries will be substantially affected, yet most companies appear unready to face the changes.
🔹 Compute providers emerge as the primary beneficiaries of the GenAI revolution, although, perhaps surprisingly, open source still plays a pivotal role in AI model development. (Why NVIDIA stock is going through the roof.)
🔹 Immediate challenges arise from AI’s inherent limitations and unparalleled capabilities. (Hallucinations and wrong answers.)
🔹 While GenAI’s quality and scalability, and its potential evolution to artificial general intelligence (AGI), are the most critical uncertainties, regulation emerges as a more immediate concern. (Regulations are healthy except when they ensure big tech’s monopoly.)
🔹 The emergence of AGI would lead to radical change in our civilization, and a growing consensus suggests it will happen sooner than anticipated.
🔹 Setting aside the debate over AGI, GenAI and LLMs are central to a sweeping transformation; GenAI may lead to a new “civilization of cognitive labor.”
👊STRAIGHT TALK👊
I think we spend too much time worrying about the use of AI in finance and fintech!
Don’t get me wrong, I love fintech, but we might be constraining our shared vision of AI's disruptive potential.
Today's article asks a far bigger question: Can we make a new civilization of cognitive labor?
Big enough for you?
To make the scale of disruption even clearer, look at the headlines of two articles that caught my eye today and got me thinking.
First, the announcement that China will mass-produce humanoid robots within 2 years. (here)
Second, a shocker about algorithms deciding who gets organ transplants in the UK. If that isn’t scary enough, the algorithm was flawed and government agencies wouldn’t address the issue. (here)
Combine these two news stories with this report, and robots will soon be making life-and-death decisions, just like in science fiction movies. Is this a dystopia or merely a practical solution?
I’m excited by our AI future, but I’m not sure I will enjoy my role as a dog to a robot!
Thoughts?
"about algorithms deciding who gets organ transplants in the UK"
This is about the same as the "algorithms" called "models" that predict "climate change" on the basis of CO2 in the atmosphere. Even though everyone familiar with informatics knows "garbage in, garbage out," those models effectively fool elected decision makers into doing the bidding of those who set the guidelines for the models. The main characteristic of such models is that they are too complex for others to understand, so they can hide the intention. It is not the model or the algorithm that makes decisions. Those who design the algorithms are the ones making the decisions. The algorithm exists to hide who actually makes the decisions, and to cement, automate, and multiply that procedure. We do not need to fear "robots". We need to fear the power mongers who decide what kind of "robot" gets produced.
Guns don't kill people. Bad humans kill people with their guns.
Your quotes below only show that change comes gradually, then suddenly, like bankruptcy: autonomy transferred to systems (bureaucracy, governments) and to their agents (machines, and algorithms as the new-age procedures, manuals, and guidelines).
First, the announcement that China will mass-produce humanoid robots within 2 years. (here)
Second, a shocker about algorithms deciding who gets organ transplants in the UK. If that isn’t scary enough, the algorithm was flawed and government agencies wouldn’t address the issue. (here)