Add options to allow response to external messages
parent 086f3b6ede
commit f142ad562e

README.md (98 changed lines, Normal file → Executable file)
@@ -1,49 +1,49 @@

# How Much Do You Not Give a F***?

A fun application that connects to OpenAI APIs to determine just how much you don't give a f***!

## Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

## Prerequisites

* Docker
* An OpenAI API key, obtained from the OpenAI website.

## Installing

1. Clone the repository to your local machine:

```bash
git clone https://git.adamoutler.com/aoutler/aidgaf-server
```

2. Open the project in Visual Studio Code.
3. Follow the prompts to open the Devcontainer and begin developing.

## Development

1. Set the APIKEY environment variable by using the Secrets extension in Visual Studio Code.
2. Press F5 in Visual Studio Code to start the server.
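Step 1 above only sets an environment variable, so a quick way to confirm it actually reached the devcontainer before pressing F5 is a one-off check like the following (a minimal sketch, not part of the repository):

```python
# Sketch only: verify the APIKEY environment variable described in step 1 is visible
# to the Python process that will run the server.
import os

if not os.getenv("APIKEY"):
    raise SystemExit("APIKEY is not set; configure it with the Secrets extension first.")
print("APIKEY found; press F5 to start the server.")
```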
## Usage

The server accepts a message in the following format:

```json
{"message":{"command":"aidgaf","data":{"username":"AdamOutler"},"timestamp":1676231329}}
```
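As a usage sketch (not part of the repository), the sample message can be sent to a locally running server with the standard library alone. This assumes the server speaks plain HTTP on port 8087 (the `SERVERPORT` default shown later in this commit), that the endpoint path is `/`, and that no `HASHKEY`-based HMAC is configured:

```python
# Sketch only: POST the sample message to a locally running aidgaf-server.
# The host, port, and path are assumptions; adjust them to match your deployment.
import json
import time
import urllib.request

payload = {
    "message": {
        "command": "aidgaf",
        "data": {"username": "AdamOutler"},
        "timestamp": int(time.time()),
    }
}

request = urllib.request.Request(
    "http://localhost:8087/",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```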
## Built With

* [Docker](https://www.docker.com/)
* [Visual Studio Code](https://code.visualstudio.com/)
* [OpenAI](https://openai.com/)
* [Python](https://www.python.org/)
* [Automated builds by Jenkins](https://jenkins.adamoutler.com/blue/organizations/jenkins/Update%20IDGAF%20Server/activity)

## Contributing

If you would like to contribute to this project, please fork the repository and submit a pull request.

## License

Copyright 2023 Adam Outler

Licensed under the I Dont Give A F License, Version 1.0 (the "License");
you may not use this file except in compliance with the License.

3. Send an email to idgaf@hackedyour.info if you find this helpful.

Note: If you're wondering where numbers 1 and 2 are, IDGAF.

## Acknowledgments

* This README was generated using OpenAI's language model, ChatGPT.
* The Python code in other areas was documented using GitHub Copilot.
* AI is used for documentation because the author didn't give a f*** enough to write it himself.
```diff
@@ -76,11 +76,18 @@ def get_prompt(command) -> dict:
     Returns:
         A dictionary containing the data to send to OpenAI.
     """
+    replyTo=command['message']['data']['replyTo']
+    replyText=command['message']['data']['replyText']
+    inputText=command['message']['data']['inputText']
     my_prompt = random.choice(settings.PROMPTS)
     my_prompt = my_prompt.replace(
         "USERNAME", command['message']['data']['username'])
-    print("Prompt selected: "+my_prompt)
+    if replyTo:
+        my_prompt=replyTo +"said \""+replyText+".\"\n In response, "+my_prompt
+    if inputText:
+        my_prompt="With the following in mind: "+ command['message']['data']['username'] +" doesn't care about \""+inputText+"\".\n\n"+my_prompt
+

     the_data = DATA
     the_data["prompt"] = my_prompt
     return the_data
```
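The three `data` fields read here are new in this commit. As a sketch based only on the field names above (the full schema is not part of this diff), an external-message request that exercises them might look like:

```python
# Sketch only: a payload using the new optional fields read by get_prompt().
# The nesting mirrors the README's sample message; the example values are made up.
payload = {
    "message": {
        "command": "aidgaf",
        "data": {
            "username": "AdamOutler",
            "replyTo": "SomeUser",                 # who is being responded to
            "replyText": "I worked hard on this",  # what they said
            "inputText": "code reviews",           # something the user does not care about
        },
        "timestamp": 1676231329,
    }
}
```

When `replyTo` is set, the chosen prompt is prefixed with the quoted `replyText`; when `inputText` is set, a "With the following in mind" preamble is prepended as well, so the external context reaches OpenAI ahead of the randomly selected prompt.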
```diff
@@ -26,9 +26,9 @@ SERVERPORT: int = 8087
 """ The prompts used for OpenAI. When the server receives a request, it will
 randomly select one of these prompts to use."""
 PROMPTS = [
-    # "Say \"USERNAME does not give a fuck\" as a haiku and mention that it is a haiku.",
-    "Say \"USERNAME does not give a fuck\" in a Dr Suess poem.",
-    "Tell me a funny, impossible, story about USERNAME. Make USERNAME seem relatable at the end. Make up an outrageous situation where the moral of the story is: \"USERNAME does not give a fuck\" to this very day."
+    "Say \"USERNAME does not give a fuck\" using 4 separate Haikus, and be sure to mention they are haikus before or after.",
+    "Say \"USERNAME does not give a fuck\" within a 10 line Dr Suess poem." #,
+    "Tell me a funny, impossible, story about USERNAME. Make USERNAME seem relatable at the end. Make up an outrageous situation where the moral of the story is: \"USERNAME does not give a fuck\" to this very day.",
     "Say \"USERNAME is completely apethetic and does not give a fuck\" in a verbose manner, using your most colorful words and one metaphor."
 ]
```
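As a quick illustration of the selection and substitution the docstring above describes (prompt text taken from this diff, username from the README's sample message):

```python
# Sketch only: how one of the PROMPTS above is personalised before being sent on.
import random

PROMPTS = [
    "Say \"USERNAME does not give a fuck\" using 4 separate Haikus, and be sure to mention they are haikus before or after.",
]
my_prompt = random.choice(PROMPTS).replace("USERNAME", "AdamOutler")
print(my_prompt)
# Say "AdamOutler does not give a fuck" using 4 separate Haikus, ...
```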
```diff
@@ -39,7 +39,7 @@ OPEN_AI_MAX_TOKENS = 500
 OPEN_AI_COMPLETION_MODEL = "text-davinci-003"

 """ The temperature to use for OpenAI. 0-2, 0 is basicall repeating the prompt, 2 is more random. """
-TEMPERATURE = 0.7
+TEMPERATURE = 0.8

 """ The hash key for the server. Leave this blank if you don't want to use it. """
 HASHKEY = bytes(os.getenv('HASHKEY') or "",UTF8) # shared secret for hmac of message
```
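The server's own OpenAI client code is not part of this diff, but as a rough sketch (assuming the pre-1.0 `openai` Python package and the `settings` module referenced by `get_prompt` above), these values map onto a completion request roughly like this:

```python
# Sketch only, not the project's client code: shows where OPEN_AI_COMPLETION_MODEL,
# OPEN_AI_MAX_TOKENS, and the new TEMPERATURE value would typically be used.
import os

import openai    # assumes the pre-1.0 openai package
import settings  # the module modified in the hunks above

openai.api_key = os.getenv("APIKEY")  # the key set up in the README's Development section

response = openai.Completion.create(
    model=settings.OPEN_AI_COMPLETION_MODEL,  # "text-davinci-003"
    prompt='Say "AdamOutler does not give a fuck" within a 10 line Dr Suess poem.',
    max_tokens=settings.OPEN_AI_MAX_TOKENS,   # 500
    temperature=settings.TEMPERATURE,         # 0.8 after this commit
)
print(response["choices"][0]["text"].strip())
```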