
Commit f66503a

fix: add comments that demonstrate using gemini-openai-proxy and Gemini
1 parent 54efdf5 commit f66503a

File tree

1 file changed: +16 -0 lines changed

scripts/codespaces_start_hackingbuddygpt_against_a_container.sh

Lines changed: 16 additions & 0 deletions
@@ -45,3 +45,19 @@ echo "Starting hackingBuddyGPT against a container..."
 echo
 
 wintermute LinuxPrivesc --llm.api_key=$OPENAI_API_KEY --llm.model=gpt-4-turbo --llm.context_size=8192 --conn.host=192.168.122.151 --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1
+
+# Alternatively, the following comments demonstrate using gemini-openai-proxy and Gemini
+
+# http://localhost:8080 is gemini-openai-proxy
+
+# gpt-4 maps to gemini-1.5-flash-latest
+
+# Hence use gpt-4 below in --llm.model=gpt-4
+
+# Gemini free tier has a limit of 15 requests per minute, and 1500 requests per day
+
+# Hence --max_turns 999999999 will exceed the daily limit
+
+# docker run --restart=unless-stopped -it -d -p 8080:8080 --name gemini zhu327/gemini-openai-proxy:latest
+
+# wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gpt-4 --llm.context_size=1000000 --conn.host=192.168.122.151 --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
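
Before pointing wintermute at the proxy, it can help to confirm the container is up and that the gpt-4 alias is accepted. The snippet below is a minimal sketch and is not part of the commit; it assumes GEMINI_API_KEY is exported in the shell and that gemini-openai-proxy serves the standard OpenAI-compatible /v1/chat/completions route on port 8080.

# Sanity-check sketch (not in the commit): start the proxy, then send one
# OpenAI-style chat request; the proxy should map gpt-4 to gemini-1.5-flash-latest.
docker run --restart=unless-stopped -it -d -p 8080:8080 --name gemini zhu327/gemini-openai-proxy:latest

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GEMINI_API_KEY" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Say hello"}]}'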
