# Examples of using Large Language Models in PHP

In this repository, I collected some PHP examples showing the usage
of Generative AI and Large Language Models (LLMs) in PHP.

For the PHP code, I used the [LLPhant](https://github.com/theodo-group/LLPhant) and [openai-php/client](https://github.com/openai-php/client) projects.
For the LLM models, I used [OpenAI](https://openai.com/) and [Llama 3](https://llama.meta.com/llama3/).
For semantic search, I used [Elasticsearch](https://github.com/elastic/elasticsearch)
as a vector database.

## Configure the environment

To execute the examples, you need to set some environment variables:

```bash
export OPENAI_API_KEY=xxx
export ELASTIC_URL=https://yyy
export ELASTIC_API_KEY=zzz
```
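
In the PHP code these values are typically read with `getenv()`. Here is a minimal sketch, just to show the idea (the error handling is illustrative and not taken from the example scripts):

```php
<?php
// Read the environment variables exported above.
$openAiKey     = getenv('OPENAI_API_KEY');
$elasticUrl    = getenv('ELASTIC_URL');
$elasticApiKey = getenv('ELASTIC_API_KEY');

if ($openAiKey === false) {
    throw new RuntimeException('OPENAI_API_KEY is not set');
}
```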

If you want to run Llama 3 locally, you can install [Ollama](https://ollama.com/)
and run the following command (in this case you don't need `OPENAI_API_KEY`):

```bash
ollama pull llama3
```

This will download Llama 3, and the model will be available through an HTTP API at
`http://localhost:11434/api/`.
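
As a quick smoke test, you can call this local API directly from PHP. This is only a sketch against Ollama's `/api/generate` endpoint; the prompt is illustrative:

```php
<?php
// Send a single prompt to the local Ollama server running the llama3 model.
$payload = json_encode([
    'model'  => 'llama3',
    'prompt' => 'Say hello in one sentence.',
    'stream' => false,
]);

$context = stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/json\r\n",
        'content' => $payload,
    ],
]);

$response = file_get_contents('http://localhost:11434/api/generate', false, $context);
echo (json_decode($response, true)['response'] ?? 'No response from Ollama') . PHP_EOL;
```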

If you want to interact with Llama 3 using a chat interface, you can execute
the following command:

```bash
ollama run llama3
```

## Examples

For OpenAI API example usage, look at the following scripts (a minimal chat sketch follows the list):

- [openai_chat](src/openai_chat.php), a simple chat use case;
- [openai_image](src/openai_image.php), generate an image using the `dall-e-3` model;
- [openai_speech](src/openai_speech.php), a text-to-speech example using the `tts-1` model;
- [openai_moderation](src/openai_moderation.php), moderation using the `text-moderation-latest` model;
- [openai_function](src/openai_function.php), a [function calling](https://platform.openai.com/docs/guides/function-calling) example.
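
To give an idea of what [openai_chat](src/openai_chat.php) looks like, here is a minimal chat-completion sketch with `openai-php/client`; the model and prompt are illustrative, see the script for the actual code:

```php
<?php
// Minimal chat completion with openai-php/client.
require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$result = $client->chat()->create([
    'model'    => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Explain what a vector database is in one sentence.'],
    ],
]);

echo $result->choices[0]->message->content . PHP_EOL;
```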

For LLPhant examples, see the following scripts (a minimal sketch follows the list):

- [llphant_chat](src/llphant_chat.php), a simple chat use case;
- [llphant_tool](src/llphant_tool.php), the function calling tool in LLPhant.
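
For comparison, here is a minimal LLPhant chat sketch; the class names follow LLPhant's documentation and may differ between versions, see [llphant_chat](src/llphant_chat.php) for the actual code:

```php
<?php
// Minimal chat with LLPhant using the OpenAI backend.

use LLPhant\OpenAIConfig;
use LLPhant\Chat\OpenAIChat;

require 'vendor/autoload.php';

$config = new OpenAIConfig();
$config->apiKey = getenv('OPENAI_API_KEY');

$chat = new OpenAIChat($config);
echo $chat->generateText('What is the AI act?') . PHP_EOL;
```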

The Retrieval-Augmented Generation (RAG) examples are in the [src/rag](src/rag/) folder.

I divided the folder by the embedding model used: [ELSER](https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-elser.html),
[Llama 3](https://llama.meta.com/llama3/) using Ollama, and GPT-3.5-turbo by [OpenAI](https://openai.com/).

For the RAG examples I used a simple PDF document that contains the [AI act](data/AI_act.pdf)
regulation proposed by the European Union in July 2023.
This document is not part of the knowledge of `GPT-3.5-turbo`, whose knowledge is fixed to 2022.
In the examples we store the document in the vector database (Elasticsearch) using the
`embedding.php` script, then we use the `qa.php` script to ask the question "What is the AI act?".
Using the RAG architecture we can expand the knowledge of the LLM, without fine-tuning the model,
while also providing the sources (chunks) used to answer the question.
A rough sketch of this flow is shown below.
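
The following is only a rough sketch of that flow, using OpenAI embeddings with LLPhant and Elasticsearch; the class names follow LLPhant's documentation, and the index name and chunk size are illustrative, so check the scripts in [src/rag](src/rag/) for the real code:

```php
<?php
// Sketch of the RAG flow: index the PDF chunks, then answer a question using them.
// Class names follow LLPhant's documentation; see src/rag for the actual scripts.

use Elastic\Elasticsearch\ClientBuilder;
use LLPhant\Chat\OpenAIChat;
use LLPhant\Embeddings\DataReader\FileDataReader;
use LLPhant\Embeddings\DocumentSplitter\DocumentSplitter;
use LLPhant\Embeddings\EmbeddingGenerator\OpenAI\OpenAI3SmallEmbeddingGenerator;
use LLPhant\Embeddings\VectorStores\Elasticsearch\ElasticsearchVectorStore;
use LLPhant\Query\SemanticSearch\QuestionAnswering;

require 'vendor/autoload.php';

// 1. Read the PDF and split it into chunks
$reader    = new FileDataReader('data/AI_act.pdf');
$documents = DocumentSplitter::splitDocuments($reader->getDocuments(), 500);

// 2. Embed the chunks and store them in Elasticsearch (the embedding.php step)
$elasticClient = ClientBuilder::create()
    ->setHosts([getenv('ELASTIC_URL')])
    ->setApiKey(getenv('ELASTIC_API_KEY'))
    ->build();

$embeddingGenerator = new OpenAI3SmallEmbeddingGenerator();
$vectorStore        = new ElasticsearchVectorStore($elasticClient, 'ai-act');
$vectorStore->addDocuments($embeddingGenerator->embedDocuments($documents));

// 3. Retrieve the relevant chunks and answer the question (the qa.php step)
$qa = new QuestionAnswering($vectorStore, $embeddingGenerator, new OpenAIChat());
echo $qa->answerQuestion('What is the AI act?') . PHP_EOL;
```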

## Copyright

Copyright (C) 2024 by Enrico Zimuel