Use AutoResponder to connect Google's Gemini AI with WhatsApp, Telegram, Facebook Messenger, Instagram, Viber or Signal. Your conversations can be handled automatically by an AI trained on extensive internet text that can respond to nearly any question you can think of. As a chatbot, Gemini can send whatever replies you like in natural language.
Similar to ChatGPT, you can set the tone, style and formality of replies. You can also tell Gemini which personality it should imitate when replying (e.g. that of a coach), or define what information it should pay attention to when responding. Check out the examples and start right away.
Overview
Always note: Gemini's knowledge is limited, and it can sometimes generate incorrect information, harmful instructions, or biased content.
How to set up Gemini and AutoResponder
1. Get AutoResponder. Check the how-to if needed.
2. Get a Gemini API key from Google AI Studio.
3. Copy your API key.
4. Create a new AutoResponder rule.
5. Tap the All button in the top-right corner if you want the AI to reply to all incoming messages. You can still limit replies to specific contacts. Otherwise, use the other Received message features of AutoResponder; check the info buttons next to each option.
6. Make sure the Reply message field is blank. Then activate the Connect Google's Gemini AI checkbox below.
7. The prompt on which the AI bases its answers is filled in automatically. You can customize it at any time to suit your needs; for example, add general information about your business that it should refer to when responding. Check here for how to design your prompt and combine it with AutoResponder's answer replacements (previous messages, name, date, time etc.). For now, test with the ready-made text.
In AutoResponder, any text before the first 1:: or 2:: is a system message that you can use to tell Gemini how to behave and what information to include in the reply. Talk to it like you would talk to a normal person.
Use 1:: before any text you want to send with the user role (a user message that has received or should receive an automatic answer) and 2:: for the model role (a reply message from Gemini). In combination with AutoResponder's answer replacements, these messages are used to send the chat history to the AI with every request, because Gemini does NOT keep a history by itself. The final %message_512% answer replacement is especially important: it tells Gemini which message you have received from your contact, so that it can reply to exactly that. To send a longer message history, add more 1:: and 2:: pairs before the last 1:: and increase the offset numbers of the prev_message and prev_reply answer replacements. Always make sure that your prompt ends with a 1:: user message, not a 2:: model message.
If you don't use 1:: or 2::, the whole prompt is sent to the AI as a single user message. If you also leave out %message_512%, the prompt becomes static. This is useful if you have a rule that, for example, replies to a keyword like "!joke" and your prompt is only "Tell me a joke".
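As a sketch of how such a prompt is taken apart, the function below splits the system text from the 1::/2:: turns. It is purely illustrative and not AutoResponder's actual parser:

```python
import re

def split_prompt(prompt: str):
    """Split an AutoResponder-style prompt into a system message plus
    a list of (role, text) turns. 1:: marks user turns, 2:: model turns.
    Illustrative only -- not AutoResponder's actual implementation."""
    # Everything before the first 1:: or 2:: is the system message.
    parts = re.split(r"(1::|2::)", prompt)
    system = parts[0].strip()
    turns = []
    for marker, text in zip(parts[1::2], parts[2::2]):
        role = "user" if marker == "1::" else "model"
        turns.append((role, text.strip()))
    return system, turns

system, turns = split_prompt(
    "Reply politely as a support agent.\n"
    "1:: Hi, when do you open?\n"
    "2:: We open at 9 am.\n"
    "1:: And on Sundays?"
)
# The history ends with a user turn, as required.
```

Note that the last turn has the user role, which is exactly why your prompt must end with 1:: followed by %message_512%.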
8. Now paste the API key you copied earlier below. The other parameters are already pre-filled or optional. You don't have to change them, but you can see how it works here.
Change the model parameter to the version of Gemini you want to use (e.g. gemini-1.5-pro).
If you set the max_output_tokens parameter, answers may be cut off once they reach that length. In that case, increase the value or tell Gemini to write shorter replies.
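To illustrate where these parameters end up, here is a minimal sketch of the request body that the Gemini generateContent REST endpoint accepts. The field names follow Google's public API documentation, but double-check them against the current reference before relying on them:

```python
def build_request_body(user_message: str,
                       max_output_tokens: int = 256,
                       temperature: float = 1.0) -> dict:
    """Sketch of a Gemini generateContent request body.
    If maxOutputTokens is set too low, replies are simply cut off."""
    return {
        "contents": [
            {"role": "user", "parts": [{"text": user_message}]}
        ],
        "generationConfig": {
            "maxOutputTokens": max_output_tokens,
            "temperature": temperature,
        },
    }

body = build_request_body("Tell me a joke", max_output_tokens=64)
```

AutoResponder builds and sends this request for you; the sketch only shows how the parameter fields map onto the API.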
Reply prefix, Policy violation reply and Error reply are custom fields of AutoResponder. For example, you can use the robot emoji 🤖 followed by a space as an optional reply prefix to show your chat partner that the answer comes from an AI.
9. Save the rule. That's it! You can now test your Gemini and messenger integration 🎉
Tips and Tricks
1. Sometimes it may be better to append commands to the last user message to get more consistent results:
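For instance, you can append a short instruction after %message_512% (the exact wording below is just a suggestion):

```
1:: %message_512% (Reply in a maximum of two short sentences.)
```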
2. You can use AutoResponder's answer replacements to give Gemini better information. For example, it can address the user by name if you are using %name% in the prompt. The same is also possible with the date and time.
3. Google bills for used tokens (pieces of words). See their pricing. Note that the tokens of both the prompt AND the reply are counted. To save tokens, you can do several things:
Keep your prompt short (for example, by including less of your users' message history).
Use a less capable AI model instead of the default one (which results in answers that are not as good, but it can be useful for simpler tasks).
Reduce the max_output_tokens parameter (limits the length of answers).
Only use Gemini AI replies for specific received messages.
Write a specific prompt for a specific rule without including the incoming message of the user.
Reduce the maxlength number of the AutoResponder answer replacements like %message_512%. The "512" can be changed to limit the number of characters.
Just create normal AutoResponder rules for messages you receive very often.
Be creative :)
With the default settings, you can send around 10,000 AI messages for 1 USD. I think that's a fair price considering how long it would take to write that many WhatsApp or other messenger messages manually.
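The rough arithmetic behind that estimate looks like this. The token count per message and the per-token price below are illustrative assumptions, not Google's actual pricing:

```python
def messages_per_usd(tokens_per_message: float,
                     usd_per_million_tokens: float) -> float:
    """How many AI replies one US dollar buys, counting the tokens of
    both prompt AND reply. Inputs are illustrative assumptions --
    check Google's current pricing for real numbers."""
    cost_per_message = tokens_per_message * usd_per_million_tokens / 1_000_000
    return 1.0 / cost_per_message

# e.g. ~200 tokens per request (prompt + reply) at a hypothetical
# 0.50 USD per million tokens gives roughly 10,000 messages per dollar.
n = messages_per_usd(200, 0.50)
```

Halving the tokens per message (a shorter prompt or tighter max_output_tokens) doubles the number of messages per dollar, which is why the tips above focus on trimming the prompt.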
Troubleshooting
Please check the toast message that appears in AutoResponder whenever a reply fails, or contact me via email. Keep in mind that Gemini can also reply with a blank message, in which case AutoResponder sends no reply; this is not a fault of AutoResponder. If you think AutoResponder is failing to answer some messages, test it with a normal non-AI rule.