MSTY Crash Course

I have been using LM Studio to run a local genAI instance, but I have run into a few minor challenges: it does not run on my old Intel-based MacBook, and I have had trouble downloading models on one of my Surface devices. So I decided to give MSTY a try. MSTY simplifies AI interaction by letting users connect to both local and online AI models, compare responses side by side, and pull in real-time data. It prioritizes user-friendliness and privacy, making AI accessible to a wider audience.

Installing MSTY is relatively trivial. Downloads are available for Windows (x64, CPU or GPU), Mac (Intel and Apple silicon), and Linux (multiple package formats). Choose the appropriate download for your device and follow the customary install procedure. Once MSTY is installed and running, you are presented with a choice: SETUP LOCAL AI or ADD REMOTE MODEL PROVIDER. I chose the first option so that I could run models locally on my device. This initiates a download of Gemma 2, and when the download is complete, MSTY configures the model and opens a chat ready for your first prompt.

If Gemma is not your favorite model, or you just want to try something else, new models are just a few clicks away: jump into Local AI Models, then Browse & Download Models. I generally stick to smaller models since I typically run this on spare hardware without the highest compute specs. Switching between models is as easy as selecting one from the drop-down menu.

Every model has different strengths and weaknesses, and no single model is a perfect fit for every use case. One of MSTY's nicer features is the ability to compare responses from different models side by side. Simply click Add Split Chat in the upper right corner of your client to split the chat window in half, then select the two models you want to use. When you type your prompt on one side, it is automatically mirrored to the other, and each model answers in parallel. Of course, I do not do this very often because it takes more resources to run two models at once. I really do need to look into getting a memory-optimized system.
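MSTY handles all of this in the GUI, but if you ever want to script the same comparison, the local models are (as I understand it) served through an Ollama-compatible API. Here is a minimal sketch assuming a standard Ollama endpoint; the port and model tags below are assumptions, so adjust them for your own setup.

```python
# Sketch: mirror one prompt to two local models and compare the answers,
# similar in spirit to MSTY's split chat. Assumes an Ollama-compatible
# endpoint; the URL, port, and model tags are assumptions.
import json
import urllib.request

ENDPOINT = "http://localhost:11434/api/generate"  # default Ollama port; yours may differ
MODELS = ["gemma2:2b", "llama3.2:3b"]             # hypothetical model tags
PROMPT = "Summarize the benefits of running AI models locally."

def ask(model: str, prompt: str) -> str:
    """Send one non-streaming generate request and return the reply text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Queried sequentially here, which is easier on memory than holding
# two models loaded at once on modest hardware.
for model in MODELS:
    print(f"=== {model} ===")
    print(ask(model, PROMPT))
```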

I do like that I can attach documents to a chat to provide more context or to have the model revise them. For a while, I have been using a set of echowriting rules to proofread and update my blog articles so that they all share a similar writing style and format. (More on that later.) MSTY also supports retrieval-augmented generation (RAG), which it calls Knowledge Stacks. This is a more efficient way to attach document collections or databases to a chat session.
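I have not seen how MSTY implements Knowledge Stacks internally, but the general RAG flow is easy to illustrate. This toy sketch uses a crude word-overlap scorer as a stand-in for real embeddings and a vector store, purely to show how retrieved snippets get folded into the prompt.

```python
# Toy sketch of retrieval-augmented generation (RAG), the idea behind
# features like Knowledge Stacks. Real systems use proper embeddings
# and a vector store; the word-overlap scorer here is illustrative only.

def score(query: str, doc: str) -> int:
    """Crude relevance: count document words that also appear in the query."""
    q = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in q)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "MSTY runs local models via an embedded runtime.",
    "Knowledge Stacks index documents for retrieval.",
    "Split chat compares two models side by side.",
]

query = "How do I attach documents for retrieval?"
context = "\n".join(retrieve(query, docs))

# The retrieved snippets are prepended to the prompt so the model can
# ground its answer in them; the augmented prompt then goes to whatever
# local model you have selected.
prompt = f"Use the following context to answer.\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)
```

A real Knowledge Stack would embed the documents once, store the vectors, and retrieve by similarity, but the prompt-assembly step at the end is the same idea.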

Although I do not use them very often, MSTY also provides a Quick Prompts library with an array of prompts for various personas and use cases. I play with these when I want to see how different word choices affect response quality.

I have been fairly happy with MSTY's performance so far, and it is definitely easy to use. I am only running models locally, so I have not attempted to configure a remote model provider yet. Are you using MSTY or something else? What has your experience been so far?