A Year Later: Reflections on conversations with ChatGPT
A year ago, we penned an article just as the buzz around GPT-3 was about to skyrocket. While we enjoyed conversing with GPT-3 on OpenAI’s Playground (months before ChatGPT jumped into the AI scene), we discussed emerging concerns and philosophical questions tied to using generative language models.
Fast forward to today, and the technological landscape, including our work at Resultid, has advanced dramatically. We’re thrilled to look back on just one year and share how we’ve been working to incorporate generative language models into our product.
More than a GPT overlay: How is Resultid different?
The surge in new products, and even legacy products with new AI features, reveals a massively popular trend: rewrapping existing generative language models to create summarization chats, AI-generated discussions, or even new content. Often these features, and even whole new products, are designed around a well-designed prompt or set of prompt-based agents.
Prompt engineering has grown from a niche corner of natural language processing (NLP) into a burgeoning field of techniques and methods for coaxing increasingly advanced outputs from different generative language models. The viral success of some of these well-engineered prompts and the products they power shows that even minimal repackaging of a generative model can greatly augment existing products and even spawn completely new ones.
What we have built is a toolkit that accelerates and scales the process of extracting insights and analysis from qualitative data. This extends beyond an automatically written summary of a PDF or an AI-penned email to your boss. Our platform processes complicated data into easily digestible findings, succinctly communicated insights, and valuable guidance for future exploration.
Resultid is much more than a GPT wrapper: our sophisticated data analysis produces qualitative output that is enhanced by generative models rather than created entirely by them. This approach to integrating generative language models into our systems reduces the risk of cascading errors, hallucinations, and other commonly documented shortcomings of such models. It also makes our tool more transparent and explainable and gives users more control over how their data is used and analyzed.
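The pattern described above can be sketched in a few lines. This is a minimal illustration, not Resultid's actual implementation: all function names are hypothetical, and a trivial term-frequency step stands in for the real analysis. The key point is the division of labor: a deterministic step computes the findings, and a generative model, if one is plugged in at all, only phrases them.

```python
from collections import Counter

def extract_findings(documents, top_n=3):
    """Deterministic step: surface the most frequent non-trivial terms.
    A toy stand-in for the structured analysis that runs *before* any
    generative model touches the data."""
    stopwords = {"the", "a", "an", "of", "to", "is", "and", "was", "in"}
    counts = Counter(
        word.lower().strip(".,")
        for doc in documents
        for word in doc.split()
    )
    return [term for term, _ in counts.most_common() if term not in stopwords][:top_n]

def summarize(findings, generate=None):
    """Generative step: a language model only *phrases* pre-computed findings.
    `generate` is a pluggable callable (e.g. an LLM client); the default is a
    plain template, so the factual content never depends on model output."""
    prompt = "Top themes: " + ", ".join(findings)
    if generate is None:
        return prompt  # deterministic fallback
    return generate(prompt)
```

Because the findings exist before the model is called, a hallucinated phrasing can be checked against them, which is what limits cascading errors in this style of pipeline.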
A scalable tool for the whole team: Rapid Adoption
Utilizing generative language models like GPT is certainly not unique to Resultid. The value we have created rests on our foundational system, which sifts through the noise of data efficiently without relying on generative language models for every step. That matters: it reduces the cascading errors and hallucinations these models cause while increasing efficiency and cutting the run time of our tools.
Our development efforts are geared towards enabling users, teams, and entire organizations to automate their existing workflows using our tools, accelerating current processes while opening up time for new ones. Many users are frustrated by how much qualitative data they must work through before they can extract insight and value in a reasonable amount of time.
Resultid’s systems enable users to save and automate workflows that accelerate qualitative data analysis. The workflows themselves, not just the results, can be shared across teams and repeated on additional data, quickly extending an analysis from one dataset to all similar datasets.
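A saved, shareable workflow is essentially an ordered list of named steps that can be re-run on any similar dataset. The sketch below illustrates that idea only; the `Workflow` class and its methods are hypothetical names, not Resultid's API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Workflow:
    """A saved analysis pipeline: an ordered list of steps that can be
    shared with teammates and applied to dataset after dataset."""
    name: str
    steps: List[Callable] = field(default_factory=list)

    def add_step(self, fn: Callable) -> "Workflow":
        self.steps.append(fn)
        return self

    def run(self, data):
        # Each step transforms the output of the previous one.
        for step in self.steps:
            data = step(data)
        return data

# Build the workflow once; the object itself, not just its results,
# is what gets shared and re-run.
wf = Workflow("feedback-triage")
wf.add_step(lambda rows: [r.lower() for r in rows])
wf.add_step(lambda rows: [r for r in rows if "refund" in r])
```

The same `wf` object can then be handed to a teammate and run unchanged on next quarter's feedback, which is what makes an analysis repeatable rather than one-off.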
Soon, we’ll launch features designed to enhance the value of our platform across teams, departments, and markets. By reducing redundant effort within a team and recommending insights based on teammates’ workflows and discoveries, we further accelerate the creation of business value from qualitative data and ease the challenge of adoption within and across teams.
We have something for everyone on teams that work with qualitative data, from high-level views of campaigns and key metrics for managers to fine-grained breakdowns of individual topics for analysts on the ground.
Trust but Verify: explainable AI systems
Our system is designed to give users efficient access to the incredible potential of generative language models while letting them understand, as fully as possible, how the models arrive at their outputs. We aren’t just giving users answers; we’re giving them a deeper understanding of their data.
Generative language models lend the finishing touches to our outputs, and that is an important distinction to highlight. These models generate summaries of our targeted analyses of qualitative data that can be directly shared with stakeholders, incorporated into product reports, or read by users for better data comprehension.
However, these outputs encompass more than AI-generated text. Resultid prioritizes interaction points with our proprietary, system-based NLP tools, which are designed to be transparent and intuitive. A user doesn’t just dump their data into a system that runs some magical, obscured process to produce results; they can see every step of the process and understand what’s happening with their data. The “show your work” ethos we’ve designed into our app has built confidence among clients who may be less familiar with data science or sophisticated AI-driven analysis.
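The "show your work" idea boils down to recording what each step of a pipeline produced, so the user can inspect every transformation instead of a black box. Here is a minimal sketch under that assumption; `run_traced` and the step names are illustrative, not Resultid's implementation.

```python
def run_traced(steps, data):
    """Run each (name, function) step in order, recording its output.
    The trace lets a user audit every transformation applied to their data."""
    trace = []
    for name, fn in steps:
        data = fn(data)
        trace.append({"step": name, "output": data})
    return data, trace

steps = [
    ("lowercase", lambda xs: [x.lower() for x in xs]),
    ("dedupe", lambda xs: sorted(set(xs))),
]
result, trace = run_traced(steps, ["B", "a", "b"])
```

Surfacing `trace` alongside `result` is the difference between "here is your summary" and "here is your summary and exactly how we got there."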
These ongoing interactions with customers and partners have consistently highlighted a crucial aspect: trustworthiness is key to the adoption of AI-centric tools in any enterprise. All the outputs must be consistently accurate, understandable, and verifiable.
We achieve this by involving users in how models are applied and in the outputs they produce. This transparency equips users to fine-tune our system while gaining deeper insight into their own data.
These tools build a bridge that lets users approach their data in the ways they know best, rather than handing them results that are just another layer of confusing output. Instead of a black box that churns out summarized and rephrased text, a user can clearly see each step and understand where the models are getting their outputs.
Before generative language models burst into the mainstream, Resultid was already transforming the way people maximize the value of chaotic qualitative data. For us, generative language models are another instrument in our expanding toolkit, just one component of a pretty neat, impactful system. It’s been more than a year since we first incorporated these models into our product, and it really is thrilling to be at the forefront of this exciting evolution.
If your role involves qualitative data, we invite you to explore our tool.