
It’s Not All about AI: Companion Software for Building AI Solutions

LLMs are all the rage, but there is an emerging ecosystem around them that opens the door to new ways of building AI solutions.


By Nate Dow

Solutions Architect Nate Dow helps BairesDev teams provide the highest quality of software delivery and products with creative business solutions.

15 min read


Saying that AI has turned the world upside down is an understatement. For better or worse, the release of ChatGPT opened the floodgates to a wider audience. Nowadays, you find folks talking about AI just like you’d find them talking about politics, their hobbies, or the latest trends. Yes, we all know that there is more to AI than NLP and generative transformers, but we can’t deny that ChatGPT (and Bard, and Bing) is AI’s killer app in the eyes of users.

OpenAI has made GPT-4 available through its API to all users, Meta has released Llama 2 under one of the most flexible not-quite-open licenses on the market, Hugging Face is seeing more users than ever, and Microsoft is doing everything in its power to position Azure as the one true platform for AI apps. And that’s just the tip of the iceberg. We are living in a time ripe with opportunities for established businesses and startups alike to tinker with AI and integrate it into their products.

But where to start? Generative AI is not a magic box. It can’t write itself into a fully functioning app (yet!), and you can’t just drop in a short script that makes an API call to a server and call it a day. That might have worked a couple of years ago, when chatbots could be branded as “companion apps”—and even then, most of those apps needed a certain level of complexity to add context, memory, and the other ingredients that make a conversation viable.

Today we are going to talk about companion software: solutions that have grown out of the AI craze and that can help everyone, from senior developers to no-code enthusiasts, build AI solutions that fit their needs and their projects. But before we dive in, we should talk about a key figure: the psychologist Jerome Bruner.

Narrative Psychology as a Framework for AI

While the goal of this article is to explore the tools we have at our disposal for building better AI solutions, it’s important to start by thinking about how we are going to use those tools.

To put it bluntly, AI is not a solution by itself. You need to have a clear goal of what you want to achieve, and more importantly, you need to have a tech stack that is able to manage the scope and intent of your project.

So, what does psychology have to do with AI? A lot, actually. For one, AI labs like OpenAI use principles of behavioral psychology to teach their machines; reinforcement learning owes plenty to the massive body of work of B.F. Skinner.

But today, we’re not going to talk about the principles of behaviorism. That would be a topic unto itself. No. Today we are going to talk about cognitive psychology, a field that owes so much to computer science and, in turn, has a lot to teach us in terms of processing information in new ways.

For the first half of the 20th century, psychology was focused on the observation and analysis of behaviors. There was little interest in what was happening inside the mind. First, because psychology was trying to distance itself from philosophy and its approach to the mind, and second, because we didn’t have the tools to measure how the brain processes information.

So, psychology turned to computer science and information theory in search of solutions. Just like how an algorithm models a process without accurately relaying how a CPU handles information, psychologists created models that explained how the mind works without having to explain what was happening in terms of brain cells.

In a way, it was an elegant solution to the problem that had dogged American psychology, but it came at the cost of biases that persist in the field to this day. Algorithms and models tend to favor linear, logical processes, and, for better or worse, humans aren’t rational; models tend to break down once we take complex behaviors and ideas into account.

Jerome Bruner was one of the cognitive psychologists who grew disillusioned with the field’s less-than-ideal results, and in response, in 1986, he published a fantastic book called Actual Minds, Possible Worlds. In it, he laid the groundwork for a new theory of the mind based on language and narrative.

His ideas encourage us to think critically about how AI should work too. Thanks to these new language models, we’ve seen the rise of autonomous agents like BabyAGI—computer software with a semblance of “inner dialogue” in tandem with short-term and long-term memory. The AI can plan, prioritize, execute, and evaluate.

Yes, the underlying instructions are still zeroes and ones—it’s a computer program, after all—but we could say something similar about brain activity. After all, what’s underneath our thoughts but the electrical activity in our brains (at least in part)?

Should AI rely purely on algorithms and data patterns, or must it also “understand” stories and contexts? One notable example is AI in healthcare, where understanding a patient’s background story—lifestyle choices, family history, cultural practices—is as critical to an accurate diagnosis as interpreting the medical data itself.

Bruner also champions cognitive flexibility: the idea that possible worlds—or realities—can be shaped through changes in our mental models. When working with AI, this flexibility suggests employing multiple modeling techniques that cater to diverse scenarios rather than sticking rigidly to a one-size-fits-all approach.

Consider weather forecasting: Although we generally use regression models based on historical data for predicting future conditions, we might need different models prioritizing real-time satellite imagery over obsolete historical records during adverse situations like cyclones or floods.

Furthermore, Bruner presents ambiguity tolerance as an intrinsic human attribute—a reminder for us working with AI to design these systems to be resilient against uncertain data streams instead of just accurate ones.

Autonomous vehicles perfectly illustrate this principle: While driving under clear weather conditions could be managed by precise sensors and map databases, navigating foggy mornings demands greater tolerance for ambiguous visual information—an alternative strategy altogether!

Last but not least, Bruner’s account of how cultural factors influence cognition offers a fascinating insight, inviting us to create more culturally sensitive AI tools. Companies like Google have already started embracing this idea; their translation software now considers informal slang alongside official linguistic rules when interpreting languages.

But wouldn’t solutions that work like humans be less logical, less accurate, and more prone to errors? Well, yes, but that’s a feature, not a bug. Many businesses stand to gain from implementing complex, human-like agents as part of their service; for example, an artist using generative AI to create images for inspiration can find that inspiration in a “mistake.”

Just think how many times you’ve been surprised by someone’s ingenuity—how kids create fantastic stories and art precisely because they are less weighed down by common sense. We wouldn’t use a hallucinating AI to make high-risk investments, but a writer looking for someone to bounce ideas off of couldn’t ask for anything better.

So, with these ideas in mind, let’s talk about tools and how they can help us create human-like agents.

LangChain

We could write a million articles about LangChain, and it wouldn’t be enough to scratch the surface of this extremely powerful framework. To quote its website: “LangChain is a framework for developing applications powered by language models.”

If building an AI-powered app from scratch is the equivalent of constructing a building with bricks and concrete, using LangChain is like assembling one out of Lego blocks. And while that analogy may sound a tad restrictive at first, don’t forget that folks have managed to build the Death Star out of Legos.

Silly comparisons aside, LangChain has a modular architecture, with each module designed for a specific piece of functionality. For example, the Models module integrates various language models and model types, acting as the backbone of any LangChain-based application. A practical example would be integrating a BERT-family model to provide semantic understanding for a chatbot built with LangChain.
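To make the Models module concrete, here’s a minimal sketch of wiring two models into a chatbot through LangChain’s wrappers—one chat model for generating replies and a BERT-family sentence-transformer for semantic understanding. It assumes the classic langchain package layout and an OpenAI key in the environment; import paths have shifted between releases, so treat it as illustrative rather than copy-paste ready.

```python
# A chat model for generating replies plus a BERT-family sentence-transformer
# for semantic understanding, both behind LangChain's model abstractions.
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import HuggingFaceEmbeddings

# Generation side of the chatbot (assumes OPENAI_API_KEY is set).
chat_model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)

# Embedding side: a BERT-family model that turns user messages into vectors
# so they can be compared by meaning rather than by exact wording.
embedder = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

vector = embedder.embed_query("Where can I track my order?")
print(len(vector))  # 384 dimensions for this particular model
```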

Another example is the Prompts module, which focuses on prompt management and optimization, improving the relevance and precision of the model’s output. For instance, if we were developing a personal assistant application, fine-tuned prompts could noticeably improve its conversational skills, tailoring responses to the user.
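As a rough illustration, a prompt in LangChain is a reusable template with named slots rather than a hand-built string. The sketch below uses the PromptTemplate class from the classic package layout; the variable names are placeholders we made up for the example.

```python
from langchain.prompts import PromptTemplate

# A reusable template: the personal assistant's tone and structure live here,
# while the user-specific pieces are filled in at call time.
assistant_prompt = PromptTemplate(
    input_variables=["user_name", "question"],
    template=(
        "You are a personal assistant for {user_name}. "
        "Answer briefly and in a friendly tone.\n\n"
        "Question: {question}"
    ),
)

print(assistant_prompt.format(user_name="Ada", question="What's on my agenda today?"))
```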

Going over each module would take too long, so we recommend resources like the official LangChain documentation and tutorials. These hands-on guides are filled with examples that provide valuable insights into working with LangChain and building AI-powered apps from the ground up.

Let’s focus instead on how tools like LangChain fit into a project. As some of you might have already imagined, LangChain works by adding a layer of abstraction. For example, instead of making direct API calls to a language model, we let LangChain handle that hassle for us.

This brings us to the first benefit. Since we are not coding directly against the API or client library of a specific LLM provider, it’s relatively easy to change providers (or to mix and match). If we do our job right, switching between LLMs is as easy as changing a single line of code.
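Here’s a small sketch of what that single-line swap looks like, assuming the classic langchain imports and valid credentials for whichever provider is enabled; the model names are only examples.

```python
from langchain.llms import OpenAI, HuggingFaceHub
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# The chain only ever sees the `llm` variable, so swapping providers is a
# matter of changing which assignment is active.
llm = OpenAI(temperature=0)                                   # provider A
# llm = HuggingFaceHub(repo_id="google/flan-t5-large")        # provider B

prompt = PromptTemplate.from_template("Summarize in one sentence: {text}")
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(text="LangChain adds an abstraction layer over LLM providers."))
```

Notice that the prompt and the chain don’t change at all when the provider does—that’s the whole point of the abstraction.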

This gives you a tremendous amount of flexibility and control over your project. Service providers sometimes change their policies with very little regard for the impact on their clients—case in point, the Reddit API debacle. If you are tied to a single provider, you are more vulnerable to sudden changes that might impact your product. LangChain helps you keep your codebase as reusable and future-proof as possible.

LangChain’s power extends beyond its core modules, thanks to extensive integrations with language model providers, text embedding providers, document loaders, and text splitters, among others. On top of the modular system and integrations, it supports a wide array of real-world applications, from autonomous agents that make decisions based on evolving circumstances to personal assistants that recall past interactions.

The pipeline for LangChain is based on the idea of building chains of actions. For example, if we were to emulate an autonomous agent, we could design several links to build the chain (a minimal sketch follows the list), such as:

  • A module that takes the user’s input, classifies it, and, based on that classification, routes it to a specialized “brain”
  • A series of specialized modules, one per area—for example, custom-trained models for each area of our business that serve as chatbots for users
  • A module that, based on the user’s request, creates a series of steps to achieve its goal
  • A module that prioritizes which step to take first
  • A module that can search the web and scrape pages to give more accurate results
  • A module that takes the answers from each step and builds one final response
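Below is a deliberately small sketch of that chain-of-links idea: classify the request, route it to a specialized chain, then rewrite the draft into a final answer. The categories, prompts, and single-provider setup are assumptions made for illustration; a real agent would add planning, memory, and external tools.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0)

# Link 1: classify the incoming request.
classify = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Classify this request as 'billing' or 'technical'. Answer with one word: {request}"))

# Link 2: specialized "brains", one per area of the business.
specialists = {
    "billing": LLMChain(llm=llm, prompt=PromptTemplate.from_template(
        "You are a billing specialist. Help with: {request}")),
    "technical": LLMChain(llm=llm, prompt=PromptTemplate.from_template(
        "You are a support engineer. Help with: {request}")),
}

# Link 3: build one final, friendly response out of the draft answer.
summarize = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Rewrite this answer as one short, friendly paragraph: {draft}"))

request = "My invoice was charged twice this month."
label = classify.run(request=request).strip().lower()
draft = specialists.get(label, specialists["technical"]).run(request=request)
print(summarize.run(draft=draft))
```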

Notice that what we are doing here is an extension of what Bruner tells us about human cognition. In human terms, we are creating “inner voices” that handle different tasks—just like how you talk to yourself when you are analyzing something, or how some people think out loud.

A use case we find particularly notable is LangChain’s ability to query tabular data. Imagine an application where a user can ask for all the employees in a company database who joined in the last six months and live in a particular city. With LangChain, we can use a language model to interact with and draw insights from structured data.
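As a hedged sketch of what that can look like, the snippet below points a language model at a small pandas DataFrame and asks the question in plain English. Note that the dataframe-agent helper has lived in langchain.agents and, in later releases, in langchain_experimental.agents, so the exact import depends on your version; the sample data is invented.

```python
import pandas as pd
from langchain.llms import OpenAI
from langchain.agents import create_pandas_dataframe_agent

# A toy "company database" for the sake of the example.
employees = pd.DataFrame({
    "name": ["Ana", "Liam", "Sofia"],
    "city": ["Austin", "Boston", "Austin"],
    "joined": pd.to_datetime(["2024-01-15", "2022-06-01", "2024-03-20"]),
})

agent = create_pandas_dataframe_agent(OpenAI(temperature=0), employees, verbose=True)
print(agent.run("Which employees joined in the last 6 months and live in Austin?"))
```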

LangChain is a versatile tool with vast potential across domains. Learning it might seem daunting at the outset, but equipped with the right resources and a healthy dose of curiosity, you’ll be on the right track. On the plus side, LangChain supports both Python and JavaScript, so it’s not like you have to learn a new language from scratch.

Pinecone as an AI solution

What’s Pinecone? Pinecone is a managed vector database designed for machine learning applications. It’s basically like giving our LLM solutions a sweet upgrade—like going from a bike to a fully loaded sports car.

Now, if you have no clue what a vector database is, don’t worry; you’re not alone. Many find the concept as perplexing as the crossword puzzles in our morning newspapers, but we’re here to iron out those creases of confusion. To simplify, vector databases are a type of software designed to handle the heavy, high-dimensional data often found in AI and machine learning applications.

Imagine you’re shopping online for a new pair of sneakers, and the website suggests items based on your searches and preferences. That’s an AI feature known as a “recommendation engine.” These engines sift through tons of data to suggest relevant items—a lot of heavy lifting, data-wise. That’s where our main star, Pinecone, springs into action: it indexes high-dimensional data efficiently, letting features like the recommendation engine do their job faster. Neat, right?

Adding a more technical layer: Pinecone organizes data as points in a vector space. Visualize a vast cosmos where every star is a data point. Some of these stars are close neighbors, while others are distant galaxies far, far away. The proximity between the stars determines their relationship, or similarity.

Pinecone helps find those neighbors efficiently and accurately. This ability forms the backbone of recommendation systems, search engines, personalization, and anomaly detection—algorithms for which data point relationships are their bread and butter.
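Here’s what those basic operations look like with the classic pinecone-client interface (newer releases expose a Pinecone class instead of init()). The index name, placeholder vectors, and metadata are made up for the sketch.

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")

# Create a cosine-similarity index sized for 384-dimensional embeddings.
if "products" not in pinecone.list_indexes():
    pinecone.create_index("products", dimension=384, metric="cosine")
index = pinecone.Index("products")

# Upsert a couple of (id, vector, metadata) "stars" into the space.
index.upsert(vectors=[
    ("sneaker-001", [0.10] * 384, {"category": "sneakers", "color": "blue"}),
    ("sneaker-002", [0.92] * 384, {"category": "boots", "color": "red"}),
])

# Ask for the nearest neighbors of a query vector.
results = index.query(vector=[0.11] * 384, top_k=3, include_metadata=True)
print(results)  # ranked matches with ids, similarity scores, and metadata
```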

Remember that one time you searched for a cat video and fell into a rabbit hole of cuddly pet videos for the next hour? That’s a typical example of an AI recommendation system at work, enabled by mechanisms like Pinecone.

Our first trick for leveraging Pinecone is to ditch traditional database lookups and embrace similarity search. Pinecone works on vector embeddings, which let us retrieve items based on semantic similarity rather than exact matches. This means our LLM solutions can work with context like never before.

Let’s paint a picture here. Imagine having to find a pair of blue shoes in a massive wardrobe. Traditional methods would have us checking every single shoe. Not with Pinecone! It would serve up all the blue shoes we have in seconds.
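A hedged sketch of that “blue shoes” lookup: embed the product descriptions and the query with the same model and let the index return the nearest matches by meaning. It reuses the hypothetical products index from the previous sketch and assumes the sentence-transformers package is installed.

```python
import pinecone
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # 384-dim vectors
pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")
index = pinecone.Index("products")

# Embed the wardrobe once...
catalogue = {
    "shoe-001": "Navy blue running sneakers with white soles",
    "shoe-002": "Red leather boots with a zipper",
    "shoe-003": "Light blue canvas slip-ons",
}
index.upsert(vectors=[
    (sku, model.encode(text).tolist(), {"description": text})
    for sku, text in catalogue.items()
])

# ...then ask for items similar in meaning to the query, not exact keyword hits.
query_vector = model.encode("a pair of blue shoes").tolist()
print(index.query(vector=query_vector, top_k=2, include_metadata=True))
```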

Now, up next, let’s talk about the scalability that Pinecone offers. Look, we know that scaling machine learning capabilities can be a daunting task. Pinecone, however, lets us easily scale horizontally, making it possible to handle huge volumes of data without sacrificing speed or efficiency.

Also, Pinecone places a tremendous amount of power in our hands with real-time processing. Instead of waiting for the system to churn through a batch of data (like binge-watching an entire season in one night), we get results fast and in real time (like live-streaming an exciting sports game).

Lastly, Pinecone’s ease of use shouldn’t be underestimated. We, after all, want to spend our time on groundbreaking ideas, not resolving implementation issues. With its managed service approach, Pinecone takes the complexity out of the equation. As easy as pie, right?

In Acts of Meaning, Bruner explains that humans are natural storytellers. Rather than remembering things as a perfect picture, we remember fragments of information and link them together with the same principles authors use to write stories. Those stories, in turn, are heavily influenced by the ideas and concepts shared by a community.

So, what does that mean for our AI? Simple. Let’s say you like video games and watch videos on the subject on YouTube. A simple recommendation system would just keep throwing video game content at you nonstop. But what if the system detects that you like a specific type of video game video—for example, videos focused on the philosophical underpinnings of games?

Suddenly, you see a recommendation for a philosophy channel. What gives? You’ve never been interested in philosophy, right? Well, you click on it and find yourself absorbed by the content. The AI just made a proximity guess that helped you discover an interest you didn’t know you had.

Pinecone lets us store language in a very natural way instead of relying on tables. We can keep documents from different conversations and search results, and use this dynamic search style to find results by proximity.
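For instance, here’s a hedged sketch that combines both tools: conversation snippets are stored as documents through LangChain’s Pinecone vector store and retrieved by proximity. It uses the classic import paths and assumes an index named conversations already exists; the index name and snippets are invented for the example.

```python
import pinecone
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Fragments of past conversations, stored as documents rather than table rows.
snippets = [
    "User asked about the lore behind the ending of the game.",
    "User wanted gameplay tips for beating the final boss.",
    "User discussed whether the game's story is a critique of free will.",
]
store = Pinecone.from_texts(snippets, embeddings, index_name="conversations")

# Proximity search: no exact keyword overlap required.
for doc in store.similarity_search("philosophical themes in video games", k=2):
    print(doc.page_content)
```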

No, we are not creating consciousness here. What we are trying to say is that we have a fantastic opportunity to rethink how we approach AI products and services—an approach that is more human-like, one that brings another way of understanding information that isn’t based purely on logic and reason.

LangChain and Pinecone are just two examples of how we can build complex systems with relative ease, thanks to the creativity and effort of the community. And as these tools keep growing, you can be sure that the future of AI is going to be extremely bright.

If you enjoyed this, be sure to check out our other AI articles.


By Nate Dow

As a Solutions Architect, Nate Dow helps BairesDev provide the highest quality software delivery and products by overcoming technical challenges and defining internal teams. His creative approaches help solve clients' business problems with technology.
