A university project using the RAG (Retrieval-Augmented Generation) technique and the Spring framework to implement a simple financial advisor agent.

lioneraV2002/rag


Getting Started

Reference Documentation

For further reference, please consider the following sections:

Guides

The following guides illustrate how to use some features concretely:

Docker Compose support

This project contains a Docker Compose file named compose.yaml. In this file, the following services have been defined:

Please review the tags of the used images and set them to the same as you're running in production.
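The list of services did not survive the export. Purely as an illustration — whether this project actually uses pgvector is an assumption, and the `compose.yaml` in the repository is authoritative — a Spring AI RAG setup typically defines a vector store service along these lines:

```yaml
# Hypothetical sketch of a compose.yaml vector-store service;
# the real file in the repository defines the actual services.
services:
  pgvector:
    image: 'pgvector/pgvector:pg16'   # pin the tag you run in production
    environment:
      - 'POSTGRES_DB=mydatabase'
      - 'POSTGRES_USER=myuser'
      - 'POSTGRES_PASSWORD=secret'
    ports:
      - '5432:5432'
```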

Maven Parent overrides

Due to Maven's design, elements are inherited from the parent POM to the project POM. While most of the inheritance is fine, it also inherits unwanted elements like <license> and <developers> from the parent. To prevent this, the project POM contains empty overrides for these elements. If you manually switch to a different parent and actually want the inheritance, you need to remove those overrides.
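The overrides mentioned above are the empty elements that Spring Initializr places in `pom.xml` to stop `<license>` and `<developers>` inheritance from `spring-boot-starter-parent`:

```xml
<!-- Empty overrides so the parent POM's <license> and <developers>
     entries are not inherited into this project's metadata. -->
<licenses>
    <license/>
</licenses>
<developers>
    <developer/>
</developers>
```

Deleting these elements restores normal inheritance from whatever parent the project points at.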

Steps to run this project:

  • Install Ollama and Docker Desktop on your local computer.
  • Pull the llama3.2:1b model using the Ollama CLI: ollama pull llama3.2:1b.
  • Run ollama serve to start the Ollama server before running its model.
  • In another terminal, run the model with ollama run llama3.2:1b.
  • Clone this project into your local repository.
  • Make the adjustments needed to run a Spring Boot project with Java 23 and Spring Boot 3.3.4.
  • Depending on which LLM you want to use, set the active profile in application.properties:
spring.profiles.active=ollama   (or: spring.profiles.active=openai)
  • The profile you choose determines which LLM is used and which further configuration is required:
  • For openai, set the API key for your account so the application can access the OpenAI API.
  • The compose file (compose.yaml) is the same for both LLMs; build and run it with docker-compose -f compose.yaml up -d. If you would rather run Ollama itself in a container, use the compose-ollama.yaml file instead: docker-compose -f compose-ollama.yaml up -d. In that case, pull the necessary models into the container's Ollama (one model per pull):
docker exec -it ollama ollama pull nomic-embed-text
docker exec -it ollama ollama pull llama3.2:1b
  • Verify the models exist with the following command:
docker exec -it ollama ollama list
  • Of course, you can choose other models for embedding vectors or chatting, according to your taste.
  • Finally, start the application and use the /query and /upload APIs to communicate with the Spring server and the LLMs connected to it.
  • When you are done, don't forget to stop both the project and the container.
  • Have fun playing. ;)
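Once the application is up, the two endpoints can be exercised with curl. The port (8080), the multipart field name (file), and the query parameter name (question) are assumptions for illustration — the controller code defines the actual contract:

```shell
# Upload a document to be ingested into the vector store
# (the multipart field name "file" is an assumption).
curl -X POST http://localhost:8080/upload \
     -F "file=@financial-report.pdf"

# Ask the RAG-backed advisor a question
# (the query parameter name "question" is an assumption).
curl -G http://localhost:8080/query \
     --data-urlencode "question=How should I diversify my portfolio?"
```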
