
Commit ccd22df

Merge branch 'ipa-lab:main' into main
2 parents 2d8e3cd + 83e3e73, commit ccd22df

11 files changed (+108, -32 lines)

.devcontainer/devcontainer.json

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 {
-    "onCreateCommand": "./codespaces_create_and_start_containers.sh"
+    "onCreateCommand": "./scripts/codespaces_create_and_start_containers.sh"
 }
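
A quick local check, not part of the commit, that the renamed onCreateCommand hook still resolves from the repository root (the working directory Codespaces uses for lifecycle commands):

```bash
# From the repository root: confirm the onCreateCommand target exists and is executable
test -x ./scripts/codespaces_create_and_start_containers.sh && echo "onCreateCommand target OK"
```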

.gitignore

Lines changed: 9 additions & 8 deletions
@@ -15,11 +15,12 @@ src/hackingBuddyGPT/usecases/web_api_testing/openapi_spec/
 src/hackingBuddyGPT/usecases/web_api_testing/converted_files/
 /src/hackingBuddyGPT/usecases/web_api_testing/documentation/openapi_spec/
 /src/hackingBuddyGPT/usecases/web_api_testing/documentation/reports/
-codespaces_ansible.cfg
-codespaces_ansible_hosts.ini
-codespaces_ansible_id_rsa
-codespaces_ansible_id_rsa.pub
-mac_ansible.cfg
-mac_ansible_hosts.ini
-mac_ansible_id_rsa
-mac_ansible_id_rsa.pub
+scripts/codespaces_ansible.cfg
+scripts/codespaces_ansible_hosts.ini
+scripts/codespaces_ansible_id_rsa
+scripts/codespaces_ansible_id_rsa.pub
+scripts/mac_ansible.cfg
+scripts/mac_ansible_hosts.ini
+scripts/mac_ansible_id_rsa
+scripts/mac_ansible_id_rsa.pub
+.aider*
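
A quick way, not from the commit, to confirm the relocated ignore patterns still match the generated Ansible files:

```bash
# Prints the matching .gitignore rule and the path if the file is ignored (exit status 0)
git check-ignore -v scripts/codespaces_ansible_hosts.ini
```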

MAC.md

Lines changed: 13 additions & 5 deletions
@@ -14,7 +14,15 @@ There are bugs in Docker Desktop on Mac that prevent creation of a custom Docker
 
 Therefore, localhost TCP port 49152 (or higher) dynamic port number is used for an ansible-ready-ubuntu container
 
-http://localhost:8080 is genmini-openai-proxy
+http://localhost:8080 is gemini-openai-proxy
+
+gpt-4 maps to gemini-1.5-flash-latest
+
+Hence use gpt-4 below in --llm.model=gpt-4
+
+Gemini free tier has a limit of 15 requests per minute, and 1500 requests per day
+
+Hence --max_turns 999999999 will exceed the daily limit
 
 For example:
 
@@ -23,7 +31,7 @@ export GEMINI_API_KEY=
 
 export PORT=49152
 
-wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gemini-1.5-flash-latest --llm.context_size=1000000 --conn.host=localhost --conn.port $PORT --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
+wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gpt-4 --llm.context_size=1000000 --conn.host=localhost --conn.port $PORT --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
 ```
 
 The above example is consolidated into shell scripts with prerequisites as follows:
@@ -40,7 +48,7 @@ The above example is consolidated into shell scripts with prerequisites as follows:
 brew install bash
 ```
 
-Bash version 4 or higher is needed for `mac_create_and_start_containers.sh`
+Bash version 4 or higher is needed for `scripts/mac_create_and_start_containers.sh`
 
 Homebrew provides GNU Bash version 5 via license GPLv3+
 
@@ -49,7 +57,7 @@ Whereas Mac provides Bash version 3 via license GPLv2
 **Create and start containers:**
 
 ```zsh
-./mac_create_and_start_containers.sh
+./scripts/mac_create_and_start_containers.sh
 ```
 
 **Start hackingBuddyGPT against a container:**
@@ -59,7 +67,7 @@ export GEMINI_API_KEY=
 ```
 
 ```zsh
-./mac_start_hackingbuddygpt_against_a_container.sh
+./scripts/mac_start_hackingbuddygpt_against_a_container.sh
 ```
 
 **Troubleshooting:**
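
Before running the consolidated scripts, a small smoke test of the proxy can save a wasted run. The sketch below is not part of the commit; it assumes gemini-openai-proxy exposes the usual OpenAI-compatible /v1/chat/completions endpoint on port 8080 and accepts the Gemini API key as the bearer token.

```bash
#!/bin/bash
# Hypothetical smoke test for the gpt-4 -> gemini-1.5-flash-latest mapping via the proxy.
set -euo pipefail

: "${GEMINI_API_KEY:?export GEMINI_API_KEY first}"

curl -sS http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer ${GEMINI_API_KEY}" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Reply with the word ok."}]}'
```

A JSON chat-completion response indicates the proxy and key are working before wintermute is pointed at it.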

README.md

Lines changed: 1 addition & 1 deletion
@@ -231,7 +231,7 @@ In the Command Palette, type `>` and `Terminal: Create New Terminal` and press t
 
 Type the following to manually run:
 ```bash
-./codespaces_start_hackingbuddygpt_against_a_container.sh
+./scripts/codespaces_start_hackingbuddygpt_against_a_container.sh
 ```
 7. Eventually, you should see:

codespaces_create_and_start_containers.sh renamed to scripts/codespaces_create_and_start_containers.sh

Lines changed: 10 additions & 1 deletion
@@ -2,14 +2,23 @@
 
 # Purpose: In GitHub Codespaces, automates the setup of Docker containers,
 # preparation of Ansible inventory, and modification of tasks for testing.
-# Usage: ./codespaces_create_and_start_containers.sh
+# Usage: ./scripts/codespaces_create_and_start_containers.sh
 
 # Enable strict error handling for better script robustness
 set -e # Exit immediately if a command exits with a non-zero status
 set -u # Treat unset variables as an error and exit immediately
 set -o pipefail # Return the exit status of the last command in a pipeline that failed
 set -x # Print each command before executing it (useful for debugging)
 
+cd $(dirname $0)
+
+bash_version=$(/bin/bash --version | head -n 1 | awk '{print $4}' | cut -d. -f1)
+
+if (( bash_version < 4 )); then
+    echo 'Error: Requires Bash version 4 or higher.'
+    exit 1
+fi
+
 # Step 1: Initialization
 
 if [ ! -f hosts.ini ]; then
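
One optional hardening note, not part of the commit: the new preamble works for the paths used here, but quoting the `dirname` call keeps it safe for checkout paths containing spaces, and `BASH_VERSINFO` avoids parsing `--version` output when the check only concerns the interpreter actually running the script.

```bash
#!/bin/bash
# Sketch of an alternative preamble (assumes the version that matters is the
# Bash interpreter running this script, so BASH_VERSINFO can be used directly).
set -euo pipefail

cd "$(dirname "$0")"   # quoted so a path containing spaces still works

if (( BASH_VERSINFO[0] < 4 )); then
    echo 'Error: Requires Bash version 4 or higher.' >&2
    exit 1
fi
```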

codespaces_start_hackingbuddygpt_against_a_container.sh renamed to scripts/codespaces_start_hackingbuddygpt_against_a_container.sh

Lines changed: 29 additions & 1 deletion
@@ -1,17 +1,27 @@
 #!/bin/bash
 
 # Purpose: In GitHub Codespaces, start hackingBuddyGPT against a container
-# Usage: ./codespaces_start_hackingbuddygpt_against_a_container.sh
+# Usage: ./scripts/codespaces_start_hackingbuddygpt_against_a_container.sh
 
 # Enable strict error handling for better script robustness
 set -e # Exit immediately if a command exits with a non-zero status
 set -u # Treat unset variables as an error and exit immediately
 set -o pipefail # Return the exit status of the last command in a pipeline that failed
 set -x # Print each command before executing it (useful for debugging)
 
+cd $(dirname $0)
+
+bash_version=$(/bin/bash --version | head -n 1 | awk '{print $4}' | cut -d. -f1)
+
+if (( bash_version < 4 )); then
+    echo 'Error: Requires Bash version 4 or higher.'
+    exit 1
+fi
+
 # Step 1: Install prerequisites
 
 # setup virtual python environment
+cd ..
 python -m venv venv
 source ./venv/bin/activate
 
@@ -35,3 +45,21 @@ echo "Starting hackingBuddyGPT against a container..."
 echo
 
 wintermute LinuxPrivesc --llm.api_key=$OPENAI_API_KEY --llm.model=gpt-4-turbo --llm.context_size=8192 --conn.host=192.168.122.151 --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1
+
+# Alternatively, the following comments demonstrate using gemini-openai-proxy and Gemini
+
+# http://localhost:8080 is gemini-openai-proxy
+
+# gpt-4 maps to gemini-1.5-flash-latest
+
+# Hence use gpt-4 below in --llm.model=gpt-4
+
+# Gemini free tier has a limit of 15 requests per minute, and 1500 requests per day
+
+# Hence --max_turns 999999999 will exceed the daily limit
+
+# docker run --restart=unless-stopped -it -d -p 8080:8080 --name gemini zhu327/gemini-openai-proxy:latest
+
+# export GEMINI_API_KEY=
+
+# wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gpt-4 --llm.context_size=1000000 --conn.host=192.168.122.151 --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
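
If the commented-out Gemini route above is enabled, the proxy container needs a moment before it accepts connections. A small wait loop, sketched here and not part of the commit, avoids a failed first request; it only assumes the proxy listens on localhost:8080 as in the `docker run` line above.

```bash
#!/bin/bash
# Wait up to 30 seconds for gemini-openai-proxy to accept TCP connections on localhost:8080.
# Uses Bash's built-in /dev/tcp, so no extra tools are required.
set -euo pipefail

for i in $(seq 1 30); do
    if (exec 3<>/dev/tcp/localhost/8080) 2>/dev/null; then
        echo "gemini-openai-proxy is reachable on localhost:8080"
        break
    fi
    if [ "$i" -eq 30 ]; then
        echo "gemini-openai-proxy did not come up on localhost:8080" >&2
        exit 1
    fi
    sleep 1
done
```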
File renamed without changes.

mac_create_and_start_containers.sh renamed to scripts/mac_create_and_start_containers.sh

Lines changed: 17 additions & 11 deletions
@@ -1,13 +1,22 @@
 #!/opt/homebrew/bin/bash
 
-# Purpose: Automates the setup of docker containers for local testing on Mac.
-# Usage: ./mac_create_and_start_containers.sh
+# Purpose: Automates the setup of docker containers for local testing on Mac
+# Usage: ./scripts/mac_create_and_start_containers.sh
 
-# Enable strict error handling
-set -e
-set -u
-set -o pipefail
-set -x
+# Enable strict error handling for better script robustness
+set -e # Exit immediately if a command exits with a non-zero status
+set -u # Treat unset variables as an error and exit immediately
+set -o pipefail # Return the exit status of the last command in a pipeline that failed
+set -x # Print each command before executing it (useful for debugging)
+
+cd $(dirname $0)
+
+bash_version=$(/opt/homebrew/bin/bash --version | head -n 1 | awk '{print $4}' | cut -d. -f1)
+
+if (( bash_version < 4 )); then
+    echo 'Error: Requires Bash version 4 or higher.'
+    exit 1
+fi
 
 # Step 1: Initialization
 
@@ -21,9 +30,6 @@ if [ ! -f tasks.yaml ]; then
     exit 1
 fi
 
-# Default value for base port
-# BASE_PORT=${BASE_PORT:-49152}
-
 # Default values for network and base port, can be overridden by environment variables
 DOCKER_NETWORK_NAME=${DOCKER_NETWORK_NAME:-192_168_65_0_24}
 DOCKER_NETWORK_SUBNET="192.168.65.0/24"
@@ -251,6 +257,6 @@ docker --debug run --restart=unless-stopped -it -d -p 8080:8080 --name gemini-op
 
 # Step 14: Ready to run hackingBuddyGPT
 
-echo "You can now run ./mac_start_hackingbuddygpt_against_a_container.sh"
+echo "You can now run ./scripts/mac_start_hackingbuddygpt_against_a_container.sh"
 
 exit 0
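
A short usage note, not from the commit: because the defaults are read with `${VAR:-default}` parameter expansion, they can be overridden per invocation; treating `BASE_PORT` as overridable is an assumption based on the comment shown in the diff above.

```bash
# Override the Docker network name (and, presumably, the base port) for a single run
DOCKER_NETWORK_NAME=my_test_net BASE_PORT=50000 ./scripts/mac_create_and_start_containers.sh
```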

mac_start_hackingbuddygpt_against_a_container.sh renamed to scripts/mac_start_hackingbuddygpt_against_a_container.sh

Lines changed: 28 additions & 4 deletions
@@ -1,17 +1,27 @@
 #!/bin/bash
 
 # Purpose: On a Mac, start hackingBuddyGPT against a container
-# Usage: ./mac_start_hackingbuddygpt_against_a_container.sh
+# Usage: ./scripts/mac_start_hackingbuddygpt_against_a_container.sh
 
 # Enable strict error handling for better script robustness
 set -e # Exit immediately if a command exits with a non-zero status
 set -u # Treat unset variables as an error and exit immediately
 set -o pipefail # Return the exit status of the last command in a pipeline that failed
 set -x # Print each command before executing it (useful for debugging)
 
+cd $(dirname $0)
+
+bash_version=$(/bin/bash --version | head -n 1 | awk '{print $4}' | cut -d. -f1)
+
+if (( bash_version < 3 )); then
+    echo 'Error: Requires Bash version 3 or higher.'
+    exit 1
+fi
+
 # Step 1: Install prerequisites
 
 # setup virtual python environment
+cd ..
 python -m venv venv
 source ./venv/bin/activate
 
@@ -44,9 +54,23 @@ echo
 
 PORT=$(docker ps | grep ansible-ready-ubuntu | cut -d ':' -f2 | cut -d '-' -f1)
 
+# http://localhost:8080 is gemini-openai-proxy
+
+# gpt-4 maps to gemini-1.5-flash-latest
+
+# https://github.com/zhu327/gemini-openai-proxy/blob/559085101f0ce5e8c98a94fb75fefd6c7a63d26d/README.md?plain=1#L146
+
+# | gpt-4 | gemini-1.5-flash-latest |
+
+# https://github.com/zhu327/gemini-openai-proxy/blob/559085101f0ce5e8c98a94fb75fefd6c7a63d26d/pkg/adapter/models.go#L60-L61
+
+# case strings.HasPrefix(openAiModelName, openai.GPT4):
+#     return Gemini1Dot5Flash
+
+# Hence use gpt-4 below in --llm.model=gpt-4
+
 # Gemini free tier has a limit of 15 requests per minute, and 1500 requests per day
-# Hence --max_turns 999999999 will exceed the daily limit
 
-# http://localhost:8080 is genmini-openai-proxy
+# Hence --max_turns 999999999 will exceed the daily limit
 
-wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gemini-1.5-flash-latest --llm.context_size=1000000 --conn.host=localhost --conn.port $PORT --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
+wintermute LinuxPrivesc --llm.api_key=$GEMINI_API_KEY --llm.model=gpt-4 --llm.context_size=1000000 --conn.host=localhost --conn.port $PORT --conn.username=lowpriv --conn.password=trustno1 --conn.hostname=test1 --llm.api_url=http://localhost:8080 --llm.api_backoff=60 --max_turns 999999999
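
The `docker ps | grep | cut` pipeline above recovers the published port but is sensitive to the table layout. An alternative sketch, not part of the commit, uses `docker port`, assuming the ansible-ready-ubuntu container publishes container port 22 for SSH:

```bash
#!/bin/bash
# Look up the host port published for the container's SSH port via `docker port`.
set -euo pipefail

PORT=$(docker port ansible-ready-ubuntu 22/tcp | head -n 1 | cut -d ':' -f2)
echo "ansible-ready-ubuntu SSH is published on localhost:${PORT}"
```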
