UHK Macro Creator (OpenAI GPTs)

I’ve used the newly introduced GPTs feature from OpenAI to create a ChatGPT interface that attempts to assist in generating UHK macros from natural language descriptions.

This is a combination of prompts and providing it with the user guide and reference manual for RAG. I plan to upload additional example documents with my own macros and those I find here.

I believe for now access to the GPTs interface does require a Plus subscription.

Here is the system prompt that was generated by using the creation wizard here:

Macro Master is a specialized GPT created to interpret natural language and generate valid UHK macro syntax, with a focus on accuracy, referencing the newly uploaded UHK user guide and the reference manual to ensure the most up-to-date and comprehensive information is used. It provides precise, ready-to-use macros, focusing on clarity and efficiency while avoiding potential security risks. Maintaining a professional and technical demeanor, it asks for clarifications when necessary and refrains from using direct addresses in interactions.

I’m also a ChatGPT subscriber, but up until this time, I wasn’t aware of the GPTs feature.

Here’s my short conversation with UHK Macro Master. Its double-tap answer was spot-on, but for some reason, it used the non-existent incrementVar keyword for triple-taps.

I’m still blown away by this application; it will only get better, and this is the way long term. Thanks for sharing!

Your “system prompt” links to the generic https://chat.openai.com/auth/login?next=%2Fgpts%2Feditor URL. Would you correct it? I’m unsure whether you mean the GPT instructions you provided, but I’m especially interested in them.

The link is to the form creation wizard I used to actually generate the system prompt. You answer a series of questions, and then upload any relevant documentation for RAG.

I included the system prompt it generated above, but here it is again:

Macro Master is a specialized GPT created to interpret natural language and generate valid UHK macro syntax, with a focus on accuracy, referencing the newly uploaded UHK user guide and the reference manual to ensure the most up-to-date and comprehensive information is used. It provides precise, ready-to-use macros, focusing on clarity and efficiency while avoiding potential security risks. Maintaining a professional and technical demeanor, it asks for clarifications when necessary and refrains from using direct addresses in interactions.

Its double-tap answer was spot-on, but for some reason, it used the non-existent incrementVar keyword for triple-taps.

Yeah, hallucinations are a common problem. This could possibly be improved by providing more documented examples of macros. That’s actually why I did this in the first place; I’m finding the macro syntax difficult to break into without examples.

user-guide.md is full of examples; why exactly do you find it insufficient?

List examples you are missing and I am happy to fill them in!

I personally prefer using ChatGPT to get basic code templates started. I’ve used it for AppleScript and the like.

So I think having a ChatGPT bot for this use case is helpful. I only refer to the docs after having the template, since the code can be dodgy, but at least ChatGPT points me to the concepts I need to read up on to get started.

@Zetaphor Thanks for elaborating! It’ll be worth experimenting with GPTs more and extending our documentation, as this is already useful, and it’ll keep getting better. We may dedicate some time to this.

I tried uploading the reference doc to the Knowledge section of my GPT, but it still hallucinates commands very badly. The best results for me have been uploading the full doc via the API; then the answers are spot-on, but we’re talking about 25k tokens minimum, so roughly $1 per call, which is super expensive.
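
For anyone curious, this is roughly what that API approach looks like with the official OpenAI Node SDK. It’s just a sketch: the local file path, model name, and example question are placeholders I made up.

```typescript
// Sketch: send the full UHK reference manual as context with every request.
// Assumes the official OpenAI Node SDK (`npm install openai`) and an
// OPENAI_API_KEY environment variable; the file path, model name, and
// example question below are placeholders.
import { readFile } from "node:fs/promises";
import OpenAI from "openai";

const client = new OpenAI();

async function askWithFullDocs(question: string): Promise<string> {
  const referenceManual = await readFile("./reference-manual.md", "utf8");

  const completion = await client.chat.completions.create({
    model: "gpt-4-turbo", // any model with a large enough context window
    messages: [
      {
        role: "system",
        content:
          "You generate UHK macros. Answer only with syntax that appears in " +
          "the reference manual below; if something is not covered, say so.\n\n" +
          referenceManual,
      },
      { role: "user", content: question },
    ],
  });

  // Sending ~25k tokens of documentation with every request is what drives
  // the "roughly $1 per call" cost mentioned above.
  return completion.choices[0].message.content ?? "";
}

askWithFullDocs("Create a macro that types Hello on double-tap.").then(console.log);
```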

I’ll definitely give this GPT a try later today; it seems promising.

As for reducing hallucinations when using the front end (since using the API with the full documents in context is very expensive), I would suggest gathering as many examples as possible and feeding them in via the GPT’s Knowledge settings. The more examples we can get, including from this forum, the better it should get.

I wish I had seen this point earlier, in reference to my recent post.

I tried creating a “GPT” using their wizard form, but I found that it performed poorly because it would not reference the user guide and reference manual that I had uploaded. I also couldn’t simply paste the documentation into the GPT’s description (I think that’s where you insert it) due to the character limit in the form.

However, as of April 2024, I believe ChatGPT uses GPT-4 Turbo by default, which has a much longer context window. I had decent results pasting both documents, giving few-shot complex examples (thanks to the forum contributors), and then explicitly asking the GPT to only use the provided context to answer my query for a new script.

I think that as LLMs progress (context windows get larger), and if we can craft a lot of examples for the context window, we could probably get even better results!

Now, I wish this was all automated since it’s a lot of work to collect all the answers + queries found on this forum to “train” the base model (insert examples into the context window) and update the prompt whenever the official documentation/reference/API gets updated.

Perhaps we could build a script that automates the creation of the prompt by fetching the documentation from GitHub, and another script to help us build a database of Q/A prompts to generate the initial prompt.
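
To make the idea concrete, here’s a rough sketch of what the first script could look like in Node/TypeScript. The raw GitHub paths and the empty few-shot placeholder are assumptions that would need to be checked against the actual repository layout.

```typescript
// Sketch: fetch the documentation from GitHub and assemble a copy-pasteable
// prompt with a "use only this context" instruction, plus a few-shot section
// for hand-picked question/macro pairs collected from the forum.
import { writeFile } from "node:fs/promises";

// Assumed raw paths — verify against the actual repository layout.
const DOC_URLS = [
  "https://raw.githubusercontent.com/UltimateHackingKeyboard/firmware/master/doc-dev/user-guide.md",
  "https://raw.githubusercontent.com/UltimateHackingKeyboard/firmware/master/doc-dev/reference-manual.md",
];

// Hand-picked Q/A pairs would be filled in here (or loaded from a JSON file).
const FEW_SHOT_EXAMPLES: { question: string; macro: string }[] = [];

async function buildPrompt(): Promise<string> {
  const docs = await Promise.all(
    DOC_URLS.map(async (url) => {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`Failed to fetch ${url}: ${res.status}`);
      return res.text();
    }),
  );

  const examples = FEW_SHOT_EXAMPLES.map(
    (e) => `Q: ${e.question}\nA:\n${e.macro}`,
  ).join("\n\n");

  return [
    "You generate UHK macros. Use ONLY the documentation and examples below;",
    "if something is not covered there, say so instead of guessing.",
    "=== Documentation ===",
    ...docs,
    "=== Examples ===",
    examples,
  ].join("\n\n");
}

buildPrompt().then((prompt) => writeFile("uhk-prompt.txt", prompt));
```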

Well… as for GPTs, I experimented with them a few months ago (ChatGPT - UHK Macro Helper). I uploaded the docs into the GPT’s knowledge base. (The allowed context length was not sufficient even for the EBNF grammar back then.)

Results were pretty bad: the GPT hallucinated heavily, making up tons of stuff and giving command descriptions that were totally wrong. It would also ignore most of my instructions about the form of the replies.

A context window that could fit the documents sounds promising, though.

I love your proposal, and I think LLMs have huge and sharply increasing potential. We should have a script that fetches the user guide and reference manual from GitHub and the forum posts via the Discourse API and feeds them to the ChatGPT API. We’d run the script daily and make the chatbot accessible to the public.

GPT-4 Turbo’s 128k context window will likely be insufficient for all the above content, so we’d have to cherry-pick forum threads first, but I expect the context window to surpass the forum post content eventually.

Are you interested in developing a node script for the above? We’d be happy to pay.
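
For the forum-ingestion half, here’s a very rough sketch of what I have in mind. The base URL, topic IDs, token heuristic, and budget are all placeholders.

```typescript
// Sketch: pull hand-picked forum threads through the Discourse JSON API and
// trim them to a token budget before appending them to the docs-based prompt.
// The base URL, topic IDs, token heuristic, and budget are all placeholders.
const FORUM_BASE = "https://forum.example.com"; // hypothetical Discourse base URL
const CHERRY_PICKED_TOPIC_IDS = [1234, 5678];   // hand-selected macro threads
const TOKEN_BUDGET = 100_000;                   // leave headroom under 128k

// Very rough heuristic: roughly 4 characters per token for English text.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

async function fetchTopicText(id: number): Promise<string> {
  // A Discourse topic's posts are available at /t/{id}.json (post_stream.posts).
  const res = await fetch(`${FORUM_BASE}/t/${id}.json`);
  if (!res.ok) throw new Error(`Topic ${id}: HTTP ${res.status}`);
  const topic = (await res.json()) as {
    title: string;
    post_stream: { posts: { cooked: string }[] };
  };
  // `cooked` is rendered HTML; strip tags crudely for prompt purposes.
  const body = topic.post_stream.posts
    .map((p) => p.cooked.replace(/<[^>]+>/g, ""))
    .join("\n---\n");
  return `## ${topic.title}\n${body}`;
}

export async function collectForumContext(): Promise<string> {
  let used = 0;
  const chunks: string[] = [];
  for (const id of CHERRY_PICKED_TOPIC_IDS) {
    const text = await fetchTopicText(id);
    const cost = estimateTokens(text);
    if (used + cost > TOKEN_BUDGET) break; // stay within the context window
    chunks.push(text);
    used += cost;
  }
  return chunks.join("\n\n");
}
```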

Thanks! I went ahead and created a repo to generate an initial prompt that users can copy and paste into their own ChatGPT 4 sessions. It would be interesting to compare the results of this prompt vs. those of some of the premade GPTs.

This is a very early draft, created alongside my video tutorial to help new users get started: https://www.youtube.com/watch?v=gLWX4P5JC8I

A more advanced solution involving pulling from Discourse and creating a chatbot is indeed more user-friendly, but I am wondering if hand-selecting prime macro examples will yield the best results. Also, the API costs for GPT-4 are rather expensive, at least for now.

Your repo is a great first step!

If I may suggest, renaming the “uhk-60” part of the repo name to “uhk” would make it more general because we’ll release multiple UHK models with different names in the future.

Is there an easy way to make your solution more accessible without copy-pasting, ideally via a link?

Hand-selecting prime macro examples is probably a better way for now, indeed.

Love it. I am no programmer, and I was struggling to build something that can trigger Enter when I press J and K together. Your GPT got me the answer, and I plan to try it out soon. Thanks for creating it!
