THE BLOG

Antony Slumbers

How long have you got?

Antony Slumbers / Midjourney

Meta, Microsoft, Alphabet & Amazon are set to spend $200 BILLION on AI infrastructure THIS YEAR.

The fundamental consequence will be that the price of intelligence will drop.

Leading to wider adoption & more innovation.

Leading to more efficient & effective AI solutions.

Leading to changing competitive dynamics in the tech industry & profound implications for the broader economy.

So best to pile in now, open up the big cheque book, and get ahead of the pack?

Err, no.

Because applying new AI to old operating models has a finite upside.

Electricity did nothing for the productivity of steam-powered factories until they were redesigned to accommodate the capabilities of the new technology.

That process took forty years.

With AI you may have two.

Not two years until you adopt any AI, but two to understand how to really leverage it.

Essentially, that means sorting your work into three buckets:

1. Tasks to be automated by AI
2. Tasks to be augmented with AI
3. Tasks to remain solely human

All whilst looking for where value, skills, data, products and services will be commoditised by AI, and where they will be enhanced.

This has to be done, with the assistance of AI (of the $20 a month variety), BEFORE you go all in and commit to significant expenditure.

Sounds obvious, but anecdotally there’s a lot of ‘What’s our AI strategy? Why haven’t we launched anything yet?’ coming from C-suites.

Bottom line: you don’t have much time, but not so little that you can’t ‘engage brain’ first.

This is as much a change management issue as a technology one.

Antony Slumbers

AI and Real Estate - You can Predict the Future

This is a presentation I gave at Future PropTech in 2018, years before Generative AI was a ‘thing’. Much of what I forecast was spot on, though some of it has not yet come to pass. But ….. it is coming.

Antony Slumbers

We need to think of LLMs as second brains

Midjourney / Antony Slumbers

We need to think of LLMs as second brains, that we can call upon to help us think better.

So when we work with them, ideally we should be doing so in 'cyborg' mode, where they become part of 'how we do stuff'. Each of us has this army of virtual interns, who know more about just about everything than we do (save for our own specialities), whom we can engage with to push us, challenge us, and generally enable us to perform at a much higher level than we could on our own.

So I was thinking about how one could develop a workshop agenda where we picked a topic, invited 20 or so domain experts to take part, utilised the TRIZ 40 inventive principles framework, and co-opted either ChatGPT Pro or Claude 3 Opus (or maybe even both) as partners.

So that is what I did.

For those not familiar with TRIZ, it is a systematic approach to innovation and problem-solving that was developed by Soviet inventor and scientist Genrich Altshuller and his colleagues starting in 1946. It is based on the principle that the evolution of systems is governed by certain universal laws and patterns. By analysing a vast body of patent data, Altshuller identified these patterns and developed a set of 40 inventive principles that can be applied to solve complex problems across various industries.

I framed the challenge (in this case to ChatGPT Pro) as: 'So let's say I am the owner of a #SpaceasaService Flex Office Brand, based in London but looking to operate through major cities across the EU. And I want you to help me, by using the TRIZ framework, to develop new and innovative ideas I can incorporate into my existing Brand or with which I can build a new Brand. And we have 4 hours to work on this. It would be you, me, and 20 domain experts.'
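For readers who prefer working through an API rather than the chat interface, here is a minimal sketch of posing the same framing to Claude programmatically. It assumes the Anthropic Python SDK and an API key in the environment; the model name and the shortened prompt wording are illustrative, not what was used in the workshop itself.

```python
# A rough sketch of posing the TRIZ framing to Claude via the Anthropic
# Python SDK (pip install anthropic). Model name and prompt wording are
# illustrative; the post describes using the chat interfaces directly.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

framing = (
    "I am the owner of a #SpaceasaService Flex Office Brand, based in London "
    "but looking to operate through major cities across the EU. Using the TRIZ "
    "40 inventive principles, help me develop new and innovative ideas I can "
    "incorporate into my existing Brand or use to build a new one. We have 4 "
    "hours: you, me, and 20 domain experts."
)

response = client.messages.create(
    model="claude-3-opus-20240229",   # illustrative model choice
    max_tokens=1024,
    messages=[{"role": "user", "content": framing}],
)
print(response.content[0].text)
```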

Obviously the challenge could be anything. As you'll see from the Agenda the workshop would work in the same way regardless.

Attached is the detailed agenda, which specifies what we humans have to do, and what our virtual brain has to do.

It's a very collaborative, iterative, but structured process that I think would be really interesting to go through. I also suspect it would generate a ton of inspiration.

Have a look and let me know what you think.

Detailed Workshop Agenda for Innovating in #SpaceasaService with TRIZ

Antony Slumbers

10 ‘must dos’ to thrive in the age of artificial intelligence

Midjourney / Antony Slumbers

In order to leverage the capabilities of AI, and to develop a competitive edge, whether as an individual, a business or a government, we need to do the following. Leaning in to these basic principles WILL stand you in good stead. Do they cover everything? No. But these are high-level ‘learnings‘ we need in place to act as foundations for everything else.

As an Individual:

1. Embrace lifelong learning: To stay relevant in the age of AI, continually update your skills, knowledge, and understanding of emerging technologies.

2. Develop uniquely human skills: Focus on cultivating creativity, empathy, curiosity, and emotional intelligence, as these are harder for AI to replicate.

3. Be open to collaboration with AI: View AI as a tool to augment your abilities rather than a threat to your job or way of life.

4. Maintain a balanced perspective on AI: Recognise both the potential benefits and risks associated with AI technology.

5. Adapt to new ways of working: Be prepared to work alongside AI systems and adapt to new roles and responsibilities that emerge as a result of AI integration.

6. Advocate for responsible AI development: Support initiatives that prioritise transparency, accountability, and ethical considerations in AI development.

7. Stay informed about AI advancements: Actively seek out information about the latest developments in AI to make informed decisions and participate in discussions about its impact on society.

8. Embrace interdisciplinary thinking: Draw insights from various fields to better understand and navigate the complexities of the AI-driven world.

9. Cultivate a growth mindset: Embrace change, be willing to experiment, and view challenges as opportunities for personal and professional growth in the age of AI.

10. Focus on solving problems that matter: Use AI as a tool to address pressing challenges in your personal life and community, leveraging its potential to create positive change.

As a Business:

1. Invest in AI adoption: Recognise the potential of AI to drive innovation, efficiency, and competitive advantage, and allocate resources accordingly.

2. Develop an AI strategy: Create a comprehensive plan that aligns AI initiatives with overall business objectives and considers the ethical and social implications of AI adoption.

3. Foster a culture of innovation: Encourage experimentation, risk-taking, and continuous learning to stay ahead of the curve in the rapidly evolving AI landscape.

4. Prioritise data management: Invest in robust data collection, storage, and analysis infrastructure to support effective AI implementation.

5. Emphasise human-AI collaboration: Focus on developing AI systems that augment and complement human capabilities rather than replacing them entirely.

6. Reskill and upskill the workforce: Provide training and education opportunities to help employees adapt to new roles and responsibilities in the age of AI.

7. Ensure transparency and accountability: Develop clear guidelines and mechanisms for ensuring the transparency, fairness, and accountability of AI systems.

8. Collaborate with diverse stakeholders: Engage with experts from various fields, policymakers, and the public to address the complex challenges and opportunities presented by AI.

9. Anticipate and mitigate risks: Proactively identify and address potential risks associated with AI adoption, such as job displacement, privacy concerns, and algorithmic bias.

10. Embrace agility and adaptability: Foster an organisational culture that can quickly respond to the rapid pace of change in the AI-driven business landscape.

As a Government:

1. Harness AI for economic growth: Encourage the adoption and integration of AI technologies across industries to boost productivity, efficiency, and innovation, ultimately driving economic growth and prosperity.
2. Invest in AI research and development: Allocate resources to support AI research and development initiatives, fostering a thriving ecosystem of innovation that can generate new opportunities and solutions.
3. Develop AI talent and skills: Invest in education and training programs to develop a highly skilled AI workforce, ensuring that society can fully capitalise on the economic and social benefits of AI.
4. Leverage AI for societal challenges: Prioritise the development and deployment of AI solutions that address pressing societal challenges, such as healthcare, education, climate change, and social inequality.
5. Foster AI entrepreneurship: Encourage and support AI-driven entrepreneurship and startups, creating an environment that nurtures innovation and allows for the rapid scaling of successful AI ventures.
6. Promote public-private partnerships: Facilitate collaborations between government, industry, and academia to accelerate AI development and deployment, ensuring that the benefits of AI are widely distributed across society.
7. Adopt AI in the public sector: Embrace AI technologies in the public sector to improve the efficiency and effectiveness of government services, ultimately benefiting citizens and society as a whole.
8. Establish AI-friendly regulations: Develop a regulatory framework that strikes a balance between promoting AI innovation and ensuring public safety, trust, and ethical considerations.
9. Encourage international collaboration: Foster global cooperation and knowledge sharing in AI development and governance, enabling societies worldwide to collectively harness the benefits of AI while mitigating potential risks.
10. Ensure inclusive AI-driven growth: Implement policies and initiatives that promote equitable access to AI technologies and the benefits they generate, ensuring that no segment of society is left behind in the AI-driven economy.

Wouldn’t that be nice!

How many of these can you tick off?

Antony Slumbers

Three Ways to Incorporate Your Own Data When Working with Generative AI

Midjourney / Antony Slumbers

A quick guide for business people.

Generative AI, via the likes of OpenAI's GPT-4, Google's Gemini and Anthropic's Claude, is revolutionising the way businesses interact with and utilise artificial intelligence. These models can generate human-like text, assist with various tasks, and provide intelligent responses to user queries. However, to truly harness the power of generative AI for your business, you will often want to incorporate your own data into the process. In this article, we'll explore three methods to integrate your company's knowledge with generative AI and discuss the pros and cons of each approach.

Method 1: Uploading Documents within the Context of a Question

The simplest way to incorporate your own data when working with generative AI is to upload relevant documents within the context of a question. This approach involves providing the AI with specific excerpts or documents that contain the information needed to answer a given query. By doing so, you can ensure that the AI has access to the most relevant and up-to-date information from your company's knowledge base.
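As a concrete illustration, here is a minimal sketch of Method 1 using the OpenAI Python SDK. The file name, question, and model choice are hypothetical; the point is simply that the document travels inside the prompt, so it must fit within the model's context window.

```python
# A minimal sketch of Method 1: paste the relevant document straight into the
# prompt so the model answers only from that context. Assumes the openai
# Python SDK (pip install openai); file name and question are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("leasing_policy.txt", "r", encoding="utf-8") as f:
    document = f.read()  # must fit within the model's context window

question = "What notice period do we require for flex office members?"

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Answer only from the document provided. "
                    "If the answer is not in it, say so."},
        {"role": "user",
         "content": f"Document:\n{document}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```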

Pros:

  • Easy to implement and doesn't require any additional technical setup

  • Allows for targeted information sharing based on the specific question

  • Provides a straightforward way to control the information the AI uses to generate responses

Cons:

  • Limited by the size of the AI's context window, which may not accommodate larger documents or extensive knowledge bases

  • Requires manual effort to select and upload relevant documents for each question

  • The AI may still generate hallucinations if the provided context is incomplete or if it makes assumptions beyond the given information

Method 2: Connecting a ChatGPT Model to an Internal Directory of Documents

Another approach is to connect a model to an internal directory of documents. This method involves setting up a system where the AI can access and retrieve information from a centralised repository of your company's documents, such as reports, presentations, and knowledge base articles. One might do this by creating a 'GPT' within ChatGPT, or by interacting with a model's API.
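Below is a rough sketch of what a very simple version of this could look like: a naive keyword search over a directory of text files, with the best-matching excerpts passed to the model. The directory path, scoring logic, and model choice are all illustrative; in practice a 'GPT' with file search, or a proper retrieval service, would do most of this work for you.

```python
# A rough sketch of Method 2: naive keyword retrieval over an internal
# directory of documents, with top matches passed to the model as context.
# Directory path, scoring, and model choice are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def keyword_search(directory: str, query: str, top_k: int = 3) -> list[str]:
    """Score each .txt file by how often it contains the query's words."""
    terms = set(query.lower().split())
    scored = []
    for path in Path(directory).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        score = sum(text.lower().count(term) for term in terms)
        if score:
            scored.append((score, text[:2000]))  # cap each excerpt
    scored.sort(reverse=True, key=lambda pair: pair[0])
    return [text for _, text in scored[:top_k]]

question = "How do we onboard a new enterprise member?"
context = "\n\n---\n\n".join(keyword_search("internal_docs", question))

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer using only the excerpts provided."},
        {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```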

Pros:

  • Enables the AI to access a larger volume of information compared to uploading documents within the context of a question

  • Allows for a more automated and scalable way of incorporating your company's knowledge into the AI's responses

  • Provides a centralised and organised way to manage and update your company's knowledge base

Cons:

  • Requires technical setup and maintenance to connect the AI model to the document directory and ensure smooth information retrieval

  • The relevance of the retrieved information may be limited by traditional keyword-based search techniques

  • The AI may generate hallucinations if the retrieved documents are not entirely relevant or if there are gaps in the information

Method 3: Using a Vector Database and Retrieval-Augmented Generation (RAG) with Claude or Another Model

The most advanced and effective way to incorporate your own data when working with generative AI is to use a vector database and Retrieval-Augmented Generation (RAG). This approach involves converting your company's knowledge into high-dimensional vectors and storing them in a vector database. The RAG technique is then used to retrieve the most relevant information from the vector database and integrate it seamlessly with the AI's generation process.
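Here is a simplified sketch of the RAG pattern, using OpenAI's embeddings API and an in-memory list standing in for a real vector database (Pinecone, Chroma, pgvector and the like). The chunks, query, and model names are illustrative, and Claude or any other capable model could equally handle the generation step.

```python
# A simplified sketch of Method 3: embed document chunks, retrieve the most
# semantically similar chunks for a query, and ground the answer in them (RAG).
# An in-memory list stands in for a real vector database; all content is illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    out = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(out.data[0].embedding)

# 1. Index: embed each chunk of company knowledge once, up front.
chunks = [
    "Flex office members may cancel with 30 days' written notice.",
    "Meeting rooms are bookable in 30-minute increments via the member app.",
    "Enterprise clients receive a dedicated account manager.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Retrieve: rank chunks by cosine similarity to the query embedding.
query = "What is the cancellation policy?"
q_vec = embed(query)
ranked = sorted(
    index,
    key=lambda item: float(np.dot(item[1], q_vec)
                           / (np.linalg.norm(item[1]) * np.linalg.norm(q_vec))),
    reverse=True,
)
context = "\n".join(chunk for chunk, _ in ranked[:2])

# 3. Generate: answer grounded only in the retrieved context.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ],
)
print(response.choices[0].message.content)
```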

Pros:

  • Offers the most accurate and relevant information retrieval based on semantic similarity

  • Enables a tight integration between the retrieved information and the AI's generation process, minimising the risk of hallucinations

  • Provides the highest level of customisation and fine-tuning to suit your specific business needs

Cons:

  • Requires significant technical expertise and resources to implement and maintain

  • Involves additional costs for setting up and running a vector database

  • Requires careful data preparation, preprocessing, and vectorisation to ensure optimal performance

Choosing the Right Approach

The best approach for incorporating your own data when working with generative AI depends on your business requirements, available resources, and the desired level of customisation and accuracy. If you prioritise ease of implementation and have a limited amount of data to incorporate, uploading documents within the context of a question may be sufficient. If you have a larger knowledge base and want a more automated solution, connecting a ChatGPT model to an internal directory of documents could be a good choice.

However, if your primary goal is to minimise hallucinations, maximise the relevance of the AI's responses, and have the highest level of customisation, using a vector database and RAG with one of the foundation models (GPT, Claude, et al.) is the most suitable option. This approach ensures that the AI is always grounded in your company's factual information and provides the most accurate and reliable responses.

Regardless of the approach you choose, it's essential to regularly review and update your company's knowledge base, monitor the AI's performance, and have human oversight to identify and correct any potential inaccuracies or hallucinations.

By incorporating your own data into generative AI, you can unlock the full potential of this technology and create AI-powered solutions that are tailored to your business needs.

—————————————-

A Note on Hallucinations:

When it comes to minimising hallucinations and ensuring accurate information (such as with a customer service chatbot), the choice of approach can have a significant impact.

A) Uploading documents within the context of a question:

Hallucination risk: Moderate to High

Explanation: While providing relevant documents within the context can help ground the responses in factual information, there is still a risk of hallucinations if the model makes assumptions or extrapolates beyond the provided context. The limited size of the context window may also lead to incomplete information, increasing the chances of hallucinations.

B) Connecting a ChatGPT model to an internal directory of documents:

Hallucination risk: Moderate

Explanation: By connecting the model to a directory of documents, you can provide a larger volume of information for the model to draw from. This can help reduce hallucinations compared to the first approach. However, the effectiveness of this approach depends on the quality and relevance of the documents retrieved by the keyword-based search. If the retrieved documents are not entirely relevant or if there are gaps in the information, the model may still generate hallucinations.

C) Using a vector database and RAG with Claude:

Hallucination risk: Low

Explanation: The combination of a vector database and RAG offers the best chance of minimising hallucinations. The vector database enables more accurate and relevant information retrieval based on semantic similarity, reducing the chances of missing important context. The RAG technique ensures that the retrieved information is tightly integrated with the generation process, keeping the model grounded in the factual information stored in the vector database. This approach allows for more control over the information the model uses to generate responses, minimising the risk of hallucinations.
