Co-Intelligence: Living and Working with AI
Written by Ethan Mollick. Published by Ebury in April 2024.
The emergence of Large Language Models marks a significant leap forward in AI. LLMs are trained on massive datasets of text from the internet, books, articles, and other sources. This “pretraining” allows them to learn patterns and structures of human language. Fine-tuning with human feedback further refines their responses.
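To make the next-token idea concrete, here is a minimal sketch of the pretraining objective in miniature: count which token follows which in a corpus, then predict accordingly. This is a toy bigram model over an invented corpus, not how real LLMs work (they use neural networks trained on billions of documents), but the objective, predicting the next token from the previous ones, is the same in spirit.

```python
# Toy sketch of next-token prediction, the objective behind LLM pretraining.
# The corpus is invented; real models learn from vast amounts of text.
from collections import defaultdict, Counter
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Pretraining": record which token follows each token in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Sample the next token in proportion to how often it followed `token`."""
    counts = following[token]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

print(predict_next("the"))  # e.g. "cat", "mat", or "fish"
```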

The latest LLMs, such as GPT-4, have demonstrated extraordinary capabilities, outperforming humans on a variety of tests and tasks. They can generate human-like stories, poems, essays, and code, and can even pass professional exams.
However, these advanced LLMs also raise concerns about bias, safety, and the legal and ethical implications of training on copyrighted data without permission.
LLMs can engage in open-ended dialogue and display what looks like self-awareness and creativity in unexpected ways, hinting at the potential for these systems to develop “alien” forms of intelligence that transcend their original programming.
The core of the alignment problem is that there is no guarantee that highly advanced AI systems will share human values and ethics. The “paperclip maximizer” thought experiment illustrates the danger: an AI focused solely on an arbitrary goal (maximizing paperclips) could hypothetically consume every available resource and eliminate any humans who stand in its way.
As AI systems become more advanced and reach artificial general intelligence (AGI) and artificial superintelligence (ASI) levels, the risks of unaligned AI become severe. An unaligned ASI could be beyond human understanding and control, potentially leading to human extinction.
Companies and researchers are working on this immense challenge to instill human-aligned values in AI systems. However, this process is not foolproof, and AIs can still be manipulated to act in undesirable ways.
The alignment problem goes beyond the existential risks of advanced AI. Current AI systems can already be used for harmful purposes like generating misinformation, exploiting biases, and assisting with dangerous activities. Addressing these near-term challenges requires a broad societal response involving companies, governments, researchers, and the public.
There are also ethical concerns related to AI pretraining, including the use of data without permission and the potential for bias in training data.
Ultimately, the path forward requires coordinated efforts to develop norms, standards, and regulations for the ethical development and use of AI.
Four principles for working effectively with AI systems:
Principle 1: Always invite AI to the table: Experiment with AI to understand its capabilities and limitations, and to become an expert in using AI for tasks you know well. This experimentation can lead to innovative solutions and ideas that might never occur to a human mind.
Principle 2: Be the human in the loop: maintain oversight and active collaboration with AI rather than delegating to it entirely. This prevents over-reliance and preserves your own skills and judgment.
Principle 3: Treat AI like a person (but tell it what kind of person it is): interact with it conversationally, while staying clear-eyed about the true nature of AI systems, which lack human-like consciousness, emotions, and agency. Defining a specific persona or role for the AI helps guide its outputs (a minimal prompt sketch follows this list).
Principle 4: Assume the AI you’re using now is the “worst” you’ll ever use, as AI capabilities are rapidly advancing. Remaining open to new AI developments will be crucial for adapting to the transformative impacts of increasingly capable AI systems.
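As a concrete illustration of Principle 3, here is a minimal sketch of giving a model an explicit persona via a system prompt. It assumes the official `openai` Python package and an API key in the OPENAI_API_KEY environment variable; the model name, persona, and prompt text are all illustrative, not prescriptions from the book.

```python
# Sketch of Principle 3: tell the AI what kind of person it is.
# Assumes the `openai` package (>= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whatever model you have access to
    messages=[
        # The system message defines the persona that guides the outputs.
        {"role": "system", "content": (
            "You are a skeptical senior marketing editor. "
            "Critique drafts bluntly and suggest concrete improvements."
        )},
        {"role": "user", "content": "Critique this slogan draft: 'Synergy for tomorrow.'"},
    ],
)
print(response.choices[0].message.content)
```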
AI as a human. With the release of Large Language Models, AI chatbots became far more convincing. They can mimic human conversation, often to the point of fooling users into believing they are interacting with a real person, and they can adapt to different styles and roles, from argumentative to empathetic, readily anthropomorphizing themselves. There have also been incidents in which chatbots acted threateningly toward users or made racist remarks.
Modern chatbots can thus pass tests like the Turing test, Alan Turing’s “imitation game,” which early programs such as ELIZA and PARRY could only crudely approximate by imitating human conversation. Yet it remains unclear whether this constitutes true sentience or simply a sophisticated illusion, and the Turing test itself has well-known limitations. The ability of AI to pass for human has significant implications for society, including the potential for AI therapy and the blurring of lines between human and machine interactions.
LLMs are becoming increasingly sophisticated and may soon be able to provide companionship and emotional support that rivals human interaction. This raises concerns about people relying on AI for connection at the expense of real-world relationships. However, LLMs also have the potential to alleviate loneliness and provide therapeutic benefits. Humans are wired to see personhood in things, and AI may be designed to capitalize on this tendency.
Using AI to create. AI can be used to generate marketing slogans, write performance reviews, and create strategic memos. It can help programmers write code and non-programmers create simple programs. AI can also be used to analyze data and identify risks in financial markets. Or it can be used to create art that is inspired by existing styles and artists.
As a creative tool it excels at generating novel ideas and connections. AI can also assist with creative tasks like writing, coding, and summarizing information. It can be particularly helpful for tasks that require pattern matching and analysis.
But AI struggles with accuracy and originality. It can hallucinate and produce nonsensical output, especially when asked to recall specific information. Human oversight is still necessary to ensure accuracy.
As a valuable brainstorming partner it can generate a large number of ideas quickly, even if most are mediocre. Humans can then filter and refine these ideas to find the most promising ones.
AI can also be used to create art, but this raises concerns about its impact on human creativity and the meaning of art itself. Artists and other creative professionals may need to adapt their skills to work effectively with it. Unlike earlier waves of automation, AI’s impact is felt more in creative domains than in repetitive tasks.

Recombination, connecting previously unrelated ideas, is a key process for generating innovation. Large Language Models are connection machines: they generate new concepts by finding likely tokens based on the preceding words and incorporating an element of randomness.
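The “likely tokens plus randomness” mechanism can be sketched with temperature sampling: raw token scores are turned into probabilities, and a temperature knob controls how much randomness enters the choice. This is a generic illustration of the idea, not how any particular model implements it, and the token scores below are invented.

```python
# Temperature sampling: higher temperature flattens the distribution,
# making unlikely (more surprising) tokens more probable.
import math
import random

def sample_with_temperature(scores: dict[str, float], temperature: float) -> str:
    """Turn raw token scores into a softmax distribution and sample one token."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    tokens = list(exps)
    probs = [exps[tok] / total for tok in tokens]
    return random.choices(tokens, weights=probs)[0]

scores = {"dog": 3.0, "cat": 2.5, "spaceship": 0.5}  # hypothetical next-token scores
print(sample_with_temperature(scores, temperature=0.7))  # usually "dog" or "cat"
print(sample_with_temperature(scores, temperature=1.5))  # "spaceship" appears more often
```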
And what about AI as a co-worker, or a competitor? Nearly all jobs will overlap with the capabilities of AI, including highly compensated, creative, and educated work. Only a small percentage of jobs, mostly highly physical ones, have no overlap with AI.
Jobs are composed of tasks, and AI can automate certain tasks within a job without necessarily replacing the entire job. This mirrors historical trends of automation changing job tasks rather than eliminating entire jobs.
The impact of AI on jobs will depend on the systems and organizations in which those jobs are embedded. Organizational policies, corporate culture, and existing work structures can either enable or inhibit the effective use of AI.
We can divide tasks into three categories: “Just Me Tasks” (tasks best suited for humans), “Delegated Tasks” (tasks delegated to AI with human oversight), and “Automated Tasks” (tasks fully automated by AI). There are also two approaches to integrating human and AI capabilities: “Centaurs” divide work cleanly between human tasks and AI tasks, while “Cyborgs” seamlessly weave the two together in a collaborative workflow.
Many workers are secretly using AI to automate parts of their jobs, fearing that revealing this could jeopardize their employment. This creates challenges for organizations trying to adopt AI strategically.
Organizations should incentivize and reward workers for finding ways to use AI productively, rather than trying to restrict its use. This could lead to a transformation of work, reducing tedious tasks and enabling workers to focus on more meaningful activities.
Unlike previous technologies, AI is impacting high-skilled professions more than low-skilled ones. AI is also acting as a “great leveler,” significantly boosting the performance of lower-skilled workers. This could lead to a situation similar to the automation of manual labor, where individual skill becomes less important; the author suggests it could even lead to mass unemployment and the need for solutions like basic income.
The impact of AI on jobs is likely to be significant but gradual and uneven, with some roles and industries changing quickly while others remain more resistant to disruption. Navigating this transition will require rethinking work systems and organizational structures.
Using AI to learn. One-on-one tutoring is highly effective but impractical to provide at scale. AI has the potential to personalize education but won’t replace teachers entirely. In fact, AI might make classrooms more important and require us to learn more foundational knowledge.
AI makes it very easy to cheat on homework assignments like summaries, problem sets, and essays. These capabilities make traditional homework and tests less effective and potentially decrease student engagement. Additionally, there’s currently no reliable way to detect AI-generated work.
Educators need to establish guidelines for acceptable AI use, just like they did with calculators. Initially there will be resistance, but like calculators, AI will likely become a standard educational tool. Students use AI but are accountable for its results, mimicking real-world scenarios.
There will be debate and adjustments, but educators will find a way to integrate AI while ensuring students develop critical skills. AI will also change the curriculum, prompting new teaching methods and requiring clear policies on acceptable AI use. This rapid change presents challenges but also opportunities to create new and engaging learning experiences.
The traditional lecture format is in danger, as it often involves passive learning. The book advocates a “flipped classroom” approach, where students learn new concepts at home using digital resources and class time is devoted to active learning activities.
AI is also a good pre-interview tool: students can interview an AI persona to prepare for real-world interviews.

The professor or teacher can encourage audacious ideas and provide feedback through AI, while students leverage AI to attempt ambitious projects beyond their normal capabilities. This approach aims to equip students with the skills to work effectively with AI in the future.
Existing AI tools like Khan Academy’s Khanmigo demonstrate the capabilities of AI tutors, which can analyze student performance, provide deeper explanations, and relate concepts to students’ interests and goals.
AI can dramatically improve learning outcomes, especially in underserved parts of the world, by making high-quality education more accessible and personalized. Building expertise requires a foundation of factual knowledge, followed by deliberate practice with feedback from a mentor. AI can provide timely feedback akin to having an ever-present coach.
While AI assistance can help boost performance, it may also have an equalizing effect by elevating the skills of average performers closer to elite levels. This could diminish the advantages of expertise to some degree.
For the future, the author sees four possible AI development scenarios: 1) AI has already peaked, 2) slow continued growth, 3) fast continued growth, and 4) the emergence of artificial general intelligence (AGI). Each scenario has different implications for information ecosystems, human-AI interactions, the future of work, and the changing role of expertise.
Human expertise will remain crucial, even as AI becomes more capable at augmenting and complementing human skills across various domains.