
Merge branch 'main' into STREAMLIT_CHAT_IMPLEMENTATION

commit c65875f3b0
t.me/xtekky 2023-04-29 00:04:16 +01:00, committed by GitHub
13 changed files with 299 additions and 108 deletions

@@ -1,3 +1,61 @@
We got a takedown request from OpenAI's legal team...
discord server for updates / support:
- https://discord.gg/gpt4free
here is a lil' poem you can read in the meantime, while I am investigating it:
```
There once was a time, in a land full of code,
A little boy sat, in his humble abode.
He tinkered and toyed with devtools galore,
And found himself curious, and eager for more.
He copied and pasted, with glee and delight,
A personal project, to last him the night.
For use academic, and also for fun,
This little boy's race he just started to run.
Now quite far removed, in a tower so grand,
A company stood, it was ruling the land.
Their software was mighty, their power supreme,
But they never expected this boy and his dream.
As he played with their code, they then started to fear,
"His project is free! What of money so dear?"
They panicked and worried, their faces turned red,
As visions of chaos now filled every head.
The CEO paced, in his office so wide,
His minions all scrambled, and trying to hide.
"Who is this bad child?" he cried out in alarm,
"Our great AI moat, why would he cause harm?"
The developers gathered, their keyboards ablaze,
To analyze closely the boy's evil ways.
They studied his project, they cracked every tome,
And soon they discovered his small, humble home.
"We must stop him!" they cried, with a shout and a shiver,
"This little boy's MAKING OUR COMPANY QUIVER!"
So they plotted and schemed to yet halt his advance,
To put an end to his dear digital dance.
They filed then with GitHub a claim most obscene,
"His code is not his," said the company team,
Because of the law, the Great Copyright Mess,
This little boy got his first takedown request.
Now new information we do not yet know,
But for the boy's good, we hope results show.
For the cause of the True, the Brave and the Right,
Till the long bitter end, will this boy live to fight.
```
(I did not write it)
_____________________________
# GPT4free - use ChatGPT, for free!!
##### You may join our discord server for updates and support ; )
@@ -5,15 +63,11 @@
<img width="1383" alt="image" src="https://user-images.githubusercontent.com/98614666/233799515-1a7cb6a3-b17f-42c4-956d-8d2a0664466f.png">
Have you ever come across some amazing projects that you couldn't use **just because you didn't have an OpenAI API key?**
**We've got you covered!** This repository offers **reverse-engineered** third-party APIs for `GPT-4/3.5`, sourced from various websites. You can simply **download** this repository, and use the available modules, which are designed to be used **just like OpenAI's official package**. **Unleash ChatGPT's potential for your projects, now!** You are welcome ; ).
By the way, thank you so much for [![Stars](https://img.shields.io/github/stars/xtekky/gpt4free?style=social)](https://github.com/xtekky/gpt4free/stargazers) and all the support!!
Just APIs from some language-model sites.
## Legal Notice <a name="legal-notice"></a>
This repository uses third-party APIs and AI models and is *not* associated with or endorsed by the API providers or the original developers of the models. This project is intended **for educational purposes only**.
This repository uses third-party APIs and is *not* associated with or endorsed by the API providers. This project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to improve their security.
Please note the following:
@@ -50,7 +104,7 @@ Please note the following:
## Todo <a name="todo"></a>
- [x] Add a GUI for the repo
- [x] Make a general package named `openai_rev`, instead of different folders
- [ ] Make a general package named `gpt4free`, instead of different folders
- [ ] Live api status to know which are down and which can be used
- [ ] Integrate more APIs in `./unfinished` as well as other ones in the lists
- [ ] Make an API to use as proxy for other projects
@@ -66,7 +120,6 @@ Please note the following:
| [t3nsor.com](https://t3nsor.com) | GPT-3.5 |
| [you.com](https://you.com) | GPT-3.5 / Internet / good search|
| [sqlchat.ai](https://sqlchat.ai) | GPT-3.5 |
| [chat.openai.com/chat](https://chat.openai.com/chat) | GPT-3.5 |
| [bard.google.com](https://bard.google.com) | custom / search |
| [bing.com/chat](https://bing.com/chat) | GPT-4/3.5 |
| [chat.forefront.ai/](https://chat.forefront.ai/) | GPT-4/3.5 |
@@ -114,7 +167,7 @@ Most code, with the exception of `quora/api.py` (by [ading2210](https://github.c
### Copyright Notice: <a name="copyright"></a>
```
xtekky/openai-gpt4: multiple reverse engineered language-model api's to decentralise the ai industry.
xtekky/gpt4free: multiple reverse engineered language-model api's to decentralise the ai industry.
Copyright (C) 2023 xtekky
This program is free software: you can redistribute it and/or modify

@@ -59,7 +59,7 @@ class Account:
while True:
sleep(1)
for _ in mail.fetch_inbox():
print(mail.get_message_content(_["id"]))
if logging: print(mail.get_message_content(_["id"]))
mail_token = match(r"(\d){5,6}", mail.get_message_content(_["id"])).group(0)
if mail_token:

@@ -1,60 +0,0 @@
import json
import re
from fake_useragent import UserAgent
import requests
class Completion:
@staticmethod
def create(
systemprompt:str,
text:str,
assistantprompt:str
):
data = [
{"role": "system", "content": systemprompt},
{"role": "user", "content": "hi"},
{"role": "assistant", "content": assistantprompt},
{"role": "user", "content": text},
]
url = f'https://openai.a2hosted.com/chat?q={Completion.__get_query_param(data)}'
try:
response = requests.get(url, headers=Completion.__get_headers(), stream=True)
except:
return Completion.__get_failure_response()
sentence = ""
for message in response.iter_content(chunk_size=1024):
message = message.decode('utf-8')
msg_match, num_match = re.search(r'"msg":"([^"]+)"', message), re.search(r'\[DONE\] (\d+)', message)
if msg_match:
# Put the captured group into a sentence
sentence += msg_match.group(1)
return {
'response': sentence
}
@classmethod
def __get_headers(cls) -> dict:
return {
'authority': 'openai.a2hosted.com',
'accept': 'text/event-stream',
'accept-language': 'en-US,en;q=0.9,id;q=0.8,ja;q=0.7',
'cache-control': 'no-cache',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'cross-site',
'user-agent': UserAgent().random
}
@classmethod
def __get_failure_response(cls) -> dict:
return dict(response='Unable to fetch the response, Please try again.', links=[], extra={})
@classmethod
def __get_query_param(cls, conversation) -> str:
encoded_conversation = json.dumps(conversation)
return encoded_conversation.replace(" ", "%20").replace('"', '%22').replace("'", "%27")
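The manual `.replace()` chain in `__get_query_param` above only escapes spaces and quotes; any other reserved character in the JSON (an `&`, a `#`, a newline) would still corrupt the query string. A minimal sketch of a more robust encoder using the standard library (`encode_conversation` is an illustrative name, not part of the module):

```python
import json
from urllib.parse import quote

def encode_conversation(conversation: list) -> str:
    # Percent-encode the JSON payload for use in a URL query string.
    # quote() with safe='' escapes every reserved character, not just
    # the spaces and quotes handled by the manual .replace() chain.
    return quote(json.dumps(conversation), safe='')

# Example: brackets, braces, quotes, colons and spaces are all escaped.
print(encode_conversation([{"role": "user", "content": "hi"}]))
```

`quote` also covers characters the original never anticipated, so the encoded conversation survives arbitrary user input.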

@@ -1,10 +0,0 @@
### Example: `openaihosted` <a name="example-openaihosted"></a>
```python
# import library
import openaihosted
res = openaihosted.Completion.create(systemprompt="U are ChatGPT", text="What is 4+4", assistantprompt="U are a helpful assistant.")['response']
print(res) ## Responds with the answer
```

@@ -0,0 +1,14 @@
import openaihosted
messages = [{"role": "system", "content": "You are a helpful assistant."}]
while True:
question = input("Question: ")
if question == "!stop":
break
messages.append({"role": "user", "content": question})
request = openaihosted.Completion.create(messages=messages)
response = request["responses"]
messages.append({"role": "assistant", "content": response})
print(f"Answer: {response}")

@@ -0,0 +1,75 @@
import requests
import json

# Needed by Completion.create / handle_stream_response below:
from threading import Thread
from re import findall
from json import loads
from queue import Empty
class Completion:
def request(prompt: str):
'''TODO: some sort of authentication + upload PDF from URL or local file
Then you should get the atoken and chat ID
'''
token = "your_token_here"
chat_id = "your_chat_id_here"
url = "https://chat-pr4yueoqha-ue.a.run.app/"
payload = json.dumps({
"v": 2,
"chatSession": {
"type": "join",
"chatId": chat_id
},
"history": [
{
"id": "VNsSyJIq_0",
"author": "p_if2GPSfyN8hjDoA7unYe",
"msg": "<start>",
"time": 1682672009270
},
{
"id": "Zk8DRUtx_6",
"author": "uplaceholder",
"msg": prompt,
"time": 1682672181339
}
]
})
# TODO: fix headers, use random user-agent, streaming response, etc
headers = {
'authority': 'chat-pr4yueoqha-ue.a.run.app',
'accept': '*/*',
'accept-language': 'en-US,en;q=0.9',
'atoken': token,
'content-type': 'application/json',
'origin': 'https://www.chatpdf.com',
'referer': 'https://www.chatpdf.com/',
'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Windows"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'cross-site',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
}
response = requests.request("POST", url, headers=headers, data=payload).text
Completion.stream_completed = True
return {'response': response}
@staticmethod
def create(prompt: str):
Thread(target=Completion.request, args=[prompt]).start()
while Completion.stream_completed != True or not Completion.message_queue.empty():
try:
message = Completion.message_queue.get(timeout=0.01)
for message in findall(Completion.regex, message):
yield loads(Completion.part1 + message + Completion.part2)['delta']
except Empty:
pass
@staticmethod
def handle_stream_response(response):
Completion.message_queue.put(response.decode())
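`create` and `handle_stream_response` above sketch a producer/consumer pattern: a worker thread feeds a queue while a generator drains it. As committed, though, `message_queue`, `regex`, `part1` and `part2` are never defined. A self-contained sketch of just that pattern, with a stand-in producer in place of the network call (all names here are illustrative):

```python
from queue import Queue, Empty
from threading import Thread

class StreamDemo:
    # Stand-ins for the class attributes Completion.create assumes.
    message_queue: Queue = Queue()
    stream_completed: bool = False

    @staticmethod
    def producer(chunks):
        # Plays the role of the network request: push chunks, then signal done.
        for chunk in chunks:
            StreamDemo.message_queue.put(chunk)
        StreamDemo.stream_completed = True

    @staticmethod
    def create(chunks):
        StreamDemo.stream_completed = False
        Thread(target=StreamDemo.producer, args=[chunks]).start()
        # Keep draining until the producer has finished AND the queue is
        # empty; checking both avoids dropping chunks that arrive late.
        while not StreamDemo.stream_completed or not StreamDemo.message_queue.empty():
            try:
                yield StreamDemo.message_queue.get(timeout=0.01)
            except Empty:
                pass

print(''.join(StreamDemo.create(['hel', 'lo'])))  # prints "hello"
```

The double condition in the `while` loop is the important part: terminating on `stream_completed` alone could discard chunks still sitting in the queue.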

@@ -1,2 +1,2 @@
to do:
- code refractoring
- code refactoring

@@ -0,0 +1,41 @@
import requests
class Completion:
def create(prompt: str,
model: str = 'openai:gpt-3.5-turbo',
temperature: float = 0.7,
max_tokens: int = 200,
top_p: float = 1,
top_k: int = 1,
frequency_penalty: float = 1,
presence_penalty: float = 1,
stopSequences: list = []):
token = requests.get('https://play.vercel.ai/openai.jpeg', headers={
'authority': 'play.vercel.ai',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'referer': 'https://play.vercel.ai/',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'}).text.replace('=','')
print(token)
headers = {
'authority': 'play.vercel.ai',
'custom-encoding': token,
'origin': 'https://play.vercel.ai',
'referer': 'https://play.vercel.ai/',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
}
for chunk in requests.post('https://play.vercel.ai/api/generate', headers=headers, stream=True, json={
'prompt': prompt,
'model': model,
'temperature': temperature,
'maxTokens': max_tokens,
'topK': top_k,
'topP': top_p,
'frequencyPenalty': frequency_penalty,
'presencePenalty': presence_penalty,
'stopSequences': stopSequences}).iter_lines():
yield (chunk)

@@ -0,0 +1,33 @@
(async () => {
let response = await fetch("https://play.vercel.ai/openai.jpeg", {
"headers": {
"accept": "*/*",
"accept-language": "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3",
"sec-ch-ua": "\"Chromium\";v=\"112\", \"Google Chrome\";v=\"112\", \"Not:A-Brand\";v=\"99\"",
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": "\"macOS\"",
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin"
},
"referrer": "https://play.vercel.ai/",
"referrerPolicy": "strict-origin-when-cross-origin",
"body": null,
"method": "GET",
"mode": "cors",
"credentials": "omit"
});
let data = JSON.parse(atob(await response.text()))
let ret = eval("(".concat(data.c, ")(data.a)"));
botPreventionToken = btoa(JSON.stringify({
r: ret,
t: data.t
}))
console.log(botPreventionToken);
})()

@@ -0,0 +1,67 @@
import requests
from base64 import b64decode, b64encode
from json import loads
from json import dumps
headers = {
'Accept': '*/*',
'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
'Connection': 'keep-alive',
'Referer': 'https://play.vercel.ai/',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36',
'sec-ch-ua': '"Chromium";v="110", "Google Chrome";v="110", "Not:A-Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
}
response = requests.get('https://play.vercel.ai/openai.jpeg', headers=headers)
token_data = loads(b64decode(response.text))
print(token_data)
raw_token = {
'a': token_data['a'] * .1 * .2,
't': token_data['t']
}
print(raw_token)
new_token = b64encode(dumps(raw_token, separators=(',', ':')).encode()).decode()
print(new_token)
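The decode/scale/re-encode steps above can be wrapped in a pure helper, which makes the transform testable without touching the network. `forge_custom_encoding` is an illustrative name; the `a * .1 * .2` scaling is taken verbatim from the script above:

```python
from base64 import b64decode, b64encode
from json import loads, dumps

def forge_custom_encoding(challenge_b64: str) -> str:
    # Decode the base64-encoded JSON challenge, apply the observed
    # a * .1 * .2 scaling, and re-encode with compact separators.
    token_data = loads(b64decode(challenge_b64))
    raw_token = {'a': token_data['a'] * .1 * .2, 't': token_data['t']}
    return b64encode(dumps(raw_token, separators=(',', ':')).encode()).decode()

# Fabricated challenge payload for illustration (not real server output):
challenge = b64encode(dumps({'a': 50, 't': 'abc'}).encode()).decode()
print(forge_custom_encoding(challenge))
```

The compact `separators=(',', ':')` matter: the server presumably compares against the exact serialized form, so the default `", "`/`": "` spacing would produce a different token.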
import requests
headers = {
'authority': 'play.vercel.ai',
'accept': '*/*',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'content-type': 'application/json',
'custom-encoding': new_token,
'origin': 'https://play.vercel.ai',
'referer': 'https://play.vercel.ai/',
'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
}
json_data = {
'prompt': 'hello\n',
'model': 'openai:gpt-3.5-turbo',
'temperature': 0.7,
'maxTokens': 200,
'topK': 1,
'topP': 1,
'frequencyPenalty': 1,
'presencePenalty': 1,
'stopSequences': [],
}
response = requests.post('https://play.vercel.ai/api/generate', headers=headers, json=json_data)
print(response.text)

@@ -1,27 +0,0 @@
import requests
token = requests.get('https://play.vercel.ai/openai.jpeg', headers={
'authority': 'play.vercel.ai',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'referer': 'https://play.vercel.ai/',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'}).text + '.'
headers = {
'authority': 'play.vercel.ai',
'custom-encoding': token,
'origin': 'https://play.vercel.ai',
'referer': 'https://play.vercel.ai/',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
}
for chunk in requests.post('https://play.vercel.ai/api/generate', headers=headers, stream=True, json={
'prompt': 'hi',
'model': 'openai:gpt-3.5-turbo',
'temperature': 0.7,
'maxTokens': 200,
'topK': 1,
'topP': 1,
'frequencyPenalty': 1,
'presencePenalty': 1,
'stopSequences': []}).iter_lines():
print(chunk)

@@ -0,0 +1,5 @@
import vercelai
for token in vercelai.Completion.create('summarize the gnu gpl 1.0'):
print(token, end='', flush=True)