diff --git a/docs/architecture/autoscaling.md b/docs/architecture/autoscaling.md index 989049b0..1c0232f6 100644 --- a/docs/architecture/autoscaling.md +++ b/docs/architecture/autoscaling.md @@ -1,54 +1,170 @@ -# Auto-scaling +# Auto-scaling your functions -Auto-scaling in OpenFaaS allows a function to scale up or down depending on demand represented by different metrics. +The [OpenFaaS Pro](/openfaas-pro/introduction/) Scaler scales functions horizontally between a minimum and maximum number of replicas, or to zero. -## Scaling by requests per second +Configuration is via a label on the function. -OpenFaaS ships with a single auto-scaling rule defined in the mounted configuration file for AlertManager. AlertManager reads usage (requests per second) metrics from Prometheus in order to know when to fire an alert to the API Gateway. +| Label | Description | Default | +| :------------------------------------- | :------------------------------------------------------------ | :------ | +| `com.openfaas.scale.max` | The maximum number of replicas to scale to. | `20` | +| `com.openfaas.scale.min` | The minimum number of replicas to scale to. | `1` | +| `com.openfaas.scale.zero` | Whether to scale to zero. | `false` | +| `com.openfaas.scale.zero-duration` | The idle duration before scaling to zero. | `15m` | +| `com.openfaas.scale.target` | The target load per replica for scaling. | `50` | +| `com.openfaas.scale.target-proportion` | Proportion of the target as a float, i.e. 1.0 = 100% of the target. | `0.90` | +| `com.openfaas.scale.type` | Scaling mode: `rps`, `capacity` or `cpu`. | `rps` | -The API Gateway handles AlertManager alerts through its `/system/alert` route. +All calls made through the gateway, whether to the synchronous `/function/` route or the asynchronous `/async-function` route, count towards this method of auto-scaling.
-The auto-scaling provided by this method can be disabled by either deleting the AlertManager deployment or by scaling the deployment to zero replicas. +![Preview of the dashboard](https://pbs.twimg.com/media/C9caE6CXUAAX_64.jpg:large) +> Preview of the Grafana dashboard during scaling. -The AlertManager rules ([alert.rules](https://github.com/openfaas/faas/blob/master/prometheus/alert.rules.yml)) for Swarm can be viewed here and altered as a configuration map. +## How auto-scaling works -All calls made through the gateway whether to a synchronous function `/function/` route or via the asynchronous `/async-function` route count towards this method of auto-scaling. +There are three auto-scaling modes described in the next section. This is how they work. -### Min/max replicas +* When configuring auto-scaling for a function, you need to set a target number which is the average load per replica of your function. +* Each mode can be used to record a current load for a function across all replicas in the OpenFaaS cluster. -The minimum (initial) and maximum replica count can be set at deployment time by adding a label to the function. +Then, a query is run periodically to calculate the current load. -* `com.openfaas.scale.min` - by default this is set to `1`, which is also the lowest value and unrelated to scale-to-zero +The current load is used to calculate the new number of replicas. 
-* `com.openfaas.scale.max` - the current default value is `20` for 20 replicas +``` +desired = current replicas * (current load / (target load per replica * current replicas)) +``` -* `com.openfaas.scale.factor` by default this is set to `20%` and has to be a value between 0-100 (including borders) +The `com.openfaas.scale.target-proportion` label can be used to adjust how early or late scaling occurs: + +``` +desired = current replicas * (current load / ( (target load per replica * current replicas) * target proportion ) ) +``` -* `com.openfaas.scale.zero` - set to `true` for scaling to zero, faas-idler must also be deployed which is part of OpenFaaS Pro +For example: -> Note: -Setting `com.openfaas.scale.min` and `com.openfaas.scale.max` to the same value, allows to disable the auto-scaling functionality of openfaas. -Setting `com.openfaas.scale.factor=0` also allows to disable the auto-scaling functionality of openfaas. +* `sleep` is running in the `capacity` mode and has a target load of 5 in-flight requests. +* The load on the sleep function is measured as `15` in-flight requests. +* There is only one replica of the `sleep` function because its minimum range is set to `1`. +* We are assuming `com.openfaas.scale.target-proportion` is set to 1.0 (100%). -For each alert fired the auto-scaler will add a number of replicas, which is a defined percentage of the max replicas. This percentage can be set using `com.openfaas.scale.factor`. For example setting `com.openfaas.scale.factor=100` will instantly scale to max replicas. This label enables to define the overall scaling behavior of the function. +``` +3 = 1 * (15 / (5 * 1)) +``` + +Therefore, 3 replicas will be set.
+ +With 3 replicas, the load will be spread more evenly, and evaluate as follows: + +``` +3 = 3 * (15 / (5 * 3)) +``` + +When the load is no longer present, it will evaluate as follows: + +``` +0 = 3 * (0 / (5 * 3)) +``` + +The function will not be scaled to zero at this point; instead, it will be held at the minimum of its range, which is 1. + +Scaling to zero is based upon traffic observed from the gateway within a set period of time defined via `com.openfaas.scale.zero-duration`. + +## Scaling modes + +* RPS `rps` + + Based upon requests per second completed, good for functions that execute quickly. The default scaling point for OpenFaaS functions used to be 5 RPS for each function. + +* Capacity `capacity` + + Based upon in-flight requests, good for slow-running functions or functions which can only handle a limited number of requests at once. This can also be used instead of RPS to ensure an even load between functions. + +* CPU `cpu` + + Configured using milli-CPU, this strategy is ideal for CPU-bound workloads, or where RPS and Capacity are not giving the expected results. + +* Scaling to zero + + Scaling to zero is disabled by default, but can be used in combination with any of the three modes.
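The replica calculation above can be sketched in a few lines of Python. This is an illustrative model only: the rounding behaviour and the clamping to the min/max range are assumptions for the sketch, not taken from the scaler's source code.

```python
import math

def desired_replicas(current, load, target, proportion=1.0, min_r=1, max_r=20):
    """Illustrative model of:
    desired = current * (load / ((target * current) * proportion))"""
    if current == 0:
        return min_r
    raw = current * (load / ((target * current) * proportion))
    # Assumption: round up, then clamp to the configured min/max range
    return max(min_r, min(max_r, math.ceil(raw)))

# Worked example from the text: 15 in-flight requests, target of 5
print(desired_replicas(current=1, load=15, target=5))  # 3
print(desired_replicas(current=3, load=15, target=5))  # 3
print(desired_replicas(current=3, load=0, target=5))   # 1, held at the minimum
```

Note how a target proportion below 1.0 causes scaling to happen earlier: with `proportion=0.9` and the same load, the first call would yield 4 replicas instead of 3.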
+ +## Testing out the various modes + +**1) Capacity-based scaling:** + +```bash +faas-cli store deploy sleep \ +--label com.openfaas.scale.max=10 \ +--label com.openfaas.scale.target=5 \ +--label com.openfaas.scale.type=capacity \ +--label com.openfaas.scale.target-proportion=1.0 \ +--label com.openfaas.scale.zero=true \ +--label com.openfaas.scale.zero-duration=5m + +# target: 5 inflight +# 100% utilization of target + +hey -z 3m -c 5 -q 5 \ + http://127.0.0.1:8080/function/sleep +``` + +**2) RPS-based scaling:** -## Scaling by CPU and/or memory utilization +```bash +# target: 50 RPS +# 90% utilization of target -When using Kubernetes the built-in [Horizontal Pod Autoscaler (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) can be used instead of AlertManager. +faas-cli store deploy nodeinfo \ +--label com.openfaas.scale.max=10 \ +--label com.openfaas.scale.target=50 \ +--label com.openfaas.scale.type=rps \ +--label com.openfaas.scale.target-proportion=0.90 \ +--label com.openfaas.scale.zero=true \ +--label com.openfaas.scale.zero-duration=60s -* Try the 2019 tutorial: [Kubernetes HPAv2 with OpenFaaS](/tutorials/kubernetes-hpa/). +hey -z 3m -c 5 -q 20 \ + http://127.0.0.1:8080/function/nodeinfo +``` -* Stefan Prodan also wrote a blog post about [HPA with OpenFaaS in 2018](https://stefanprodan.com/2018/kubernetes-scaleway-baremetal-arm-terraform-installer/#horizontal-pod-autoscaling) +**3) CPU-based scaling:** -> Note: In addition to the above, both of the OpenFaaS watchdogs automatically provide custom metrics that can be used for HPAv2 scaling rules. 
+```bash +# target: 100m CPU (milli-CPU) +# 50% utilization of target -## Zero-scale +faas-cli store deploy figlet \ +--label com.openfaas.scale.max=10 \ +--label com.openfaas.scale.target=100 \ +--label com.openfaas.scale.type=cpu \ +--label com.openfaas.scale.target-proportion=0.50 \ +--label com.openfaas.scale.zero=true \ +--label com.openfaas.scale.zero-duration=5m -Scaling from zero is turned on by default, for any function or endpoint, this setting can be toggled on or off. Scaling to zero to recover idle resources is available in OpenFaaS, but is not turned on by default. There are two parts that make up scaling to zero or (zero-scale) in the project. +hey -m POST -d data -z 3m -c 5 -q 10 \ + http://127.0.0.1:8080/function/figlet +``` -For a technical overview see the blog post: [Scale to Zero and Back Again with OpenFaaS](https://www.openfaas.com/blog/zero-scale/). + +**4) CPU-based scaling without scale to zero:** + +```bash +# target: 50m CPU (milli-CPU) +# 70% utilization of target + +faas-cli store deploy cows \ +--label com.openfaas.scale.max=5 \ +--label com.openfaas.scale.target=50 \ +--label com.openfaas.scale.type=cpu \ +--label com.openfaas.scale.target-proportion=0.70 \ +--label com.openfaas.scale.zero=false + +hey -m POST -d data -z 3m -c 5 -q 10 \ + http://127.0.0.1:8080/function/cows +``` + +## Scaling to Zero aka "Zero-scale" + +Scaling functions to zero replicas when idle can save on costs by reducing the number of nodes required in your cluster. It can also reduce resource consumption on statically-sized or on-premises clusters. + +In OpenFaaS, scaling to zero is turned off by default; it is part of the OpenFaaS Pro bundle and is configured in the helm chart. Once installed, idle functions can be configured to scale down when they haven't received any requests for a period of time. We suggest that you set this figure to 2x the maximum timeout, or use the default timeout value if that makes sense for most of your functions.
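As a mental model for the idle detection described above, a function becomes a candidate for scaling to zero once no invocation has been observed within its `zero-duration` window. The function and parameter names below are hypothetical, for illustration only, and do not reflect the Pro scaler's actual implementation:

```python
from datetime import datetime, timedelta

def should_scale_to_zero(last_invocation: datetime, zero_duration: timedelta,
                         now: datetime, zero_enabled: bool = True) -> bool:
    # Only functions opted in via com.openfaas.scale.zero=true are considered
    if not zero_enabled:
        return False
    # Idle when no traffic has been seen for the whole zero-duration window
    return (now - last_invocation) >= zero_duration

now = datetime(2022, 1, 1, 12, 0)
# Idle for 20m with a 15m window -> scale to zero
print(should_scale_to_zero(now - timedelta(minutes=20), timedelta(minutes=15), now))  # True
# Invoked 5m ago -> keep running
print(should_scale_to_zero(now - timedelta(minutes=5), timedelta(minutes=15), now))   # False
```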
### Scaling up from zero replicas @@ -58,28 +174,64 @@ The latency between accepting a request for an unavailable function and serving * What if I don't want a "cold start"? -The cold start in OpenFaaS is strictly optional and it is recommended that for time-sensitive operations you avoid one. This can be achieved by not scaling critical functions down to zero replicas, or by invoking them through the asynchronous route which decouples the request time from the caller. + The cold start in OpenFaaS is strictly optional and it is recommended that for time-sensitive operations you avoid one by having a minimum scale of 1 or more replicas. This can be achieved by not scaling critical functions down to zero replicas, or by invoking them through the asynchronous route which decouples the request time from the caller. * What exactly happens in a "cold start"? -The "Cold Start" consists of the following: creating a request to schedule a container on a node, finding a suitable node, pulling the Docker image and running the initial checks once the container is up and running. This "running" or "ready" state also has to be synchronised between all nodes in the cluster. The total value can be reduced by pre-pulling images on each node and by setting the Kubernetes Liveness and Readiness Probes to run at a faster cadence. + The "Cold Start" consists of the following: creating a request to schedule a container on a node, finding a suitable node, pulling the Docker image and running the initial checks once the container is up and running. This "running" or "ready" state also has to be synchronised between all nodes in the cluster. The total value can be reduced by pre-pulling images on each node and by setting the Kubernetes Liveness and Readiness Probes to run at a faster cadence. + + Instructions for optimizing for a low cold-start are provided in [the helm chart for Kubernetes](https://github.com/openfaas/faas-netes/tree/master/chart/openfaas). 
+ + When `scale_from_zero` is enabled a cache is maintained in memory indicating the readiness of each function. If a function is not ready when a request is received, then the HTTP connection is blocked, the function is scaled to its minimum replica count, and as soon as a replica is available the request is proxied through as normal. You will see this process taking place in the logs of the *gateway* component. + + For an overview of cold-starts in OpenFaaS see: [Dude where's my coldstart?](https://www.openfaas.com/blog/what-serverless-coldstart/) + +* What if my function is still running when it gets scaled down? + + That shouldn't happen, provided that you've set an adequate value for the idle detection for your function. But if it does, the OpenFaaS watchdog and our official function templates will allow a graceful termination of the function. See also: [Improving long-running jobs for OpenFaaS users](https://www.openfaas.com/blog/long-running-jobs/) + +## Scaling using Kubernetes HPA + +You can also use the Kubernetes [Horizontal Pod Autoscaler (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) to scale functions on CPU and RAM, at the cost of additional integration work and reduced ease of use. The HPA scaler also doesn't support scaling to zero replicas. + +First, disable the OpenFaaS autoscaler for a given function by setting the minimum and maximum replicas to the same value. + +Then scale upon CPU or RAM, using the guide below: + +* [Kubernetes HPAv2 with OpenFaaS](/tutorials/kubernetes-hpa/) + +## Legacy scaling for the Community Edition (CE) + +!!! warning "Legacy scaling for the Community Edition (CE)" + The Community Edition (CE) of OpenFaaS uses our legacy scaling technology, which is meant for development only. Instead, use our OpenFaaS Pro scaler. + +A single auto-scaling rule, defined in the mounted configuration file for AlertManager, is used for all functions.
AlertManager reads usage (requests per second) metrics from Prometheus in order to know when to fire an alert to the API Gateway. -Instructions for optimizing for a low cold-start are provided in [the helm chart for Kubernetes](https://github.com/openfaas/faas-netes/tree/master/chart/openfaas). +The API Gateway handles AlertManager alerts through its `/system/alert` route. -When `scale_from_zero` is enabled a cache is maintained in memory indicating the readiness of each function. If when a request is received a function is not ready, then the HTTP connection is blocked, the function is scaled to min replicas, and as soon as a replica is available the request is proxied through as per normal. You will see this process taking place in the logs of the *gateway* component. +The auto-scaling provided by this method can be disabled by either deleting the AlertManager deployment or by scaling the deployment to zero replicas. -### Scaling down to zero +The AlertManager rules ([alert.rules](https://github.com/openfaas/faas/blob/master/prometheus/alert.rules.yml)) for Swarm can be viewed here and altered as a configuration map. + +All calls made through the gateway whether to a synchronous function `/function/` route or via the asynchronous `/async-function` route count towards this method of auto-scaling. -Scaling down to zero replicas is also called "idling". +### Min/max replicas -There are two approaches available for idling functions: +The minimum (initial) and maximum replica count can be set at deployment time by adding a label to the function. -#### 1) faas-idler +* `com.openfaas.scale.min` - by default this is set to `1`, which is also the lowest value and unrelated to scale-to-zero -You can use the faas-idler which is available with [OpenFaaS Pro](https://openfaas.com/support). `faas-idler` allows some basic presets to be configured and then monitors the built-in Prometheus metrics on a regular basis to determine if a function should be scaled to zero. 
Only functions with a label of `com.openfaas.scale.zero=true` are scaled to zero, all others are ignored. Functions are scaled to zero through the OpenFaaS REST API. +* `com.openfaas.scale.max` - the current default value is `20` for 20 replicas -If you wish to only observe which functions would have been scaled down - pass the "-read-only" flag, or set this via the helm chart. +* `com.openfaas.scale.factor` - by default this is set to `20%` and has to be a value between 0-100 inclusive -#### 2) OpenFaaS REST API + +* `com.openfaas.scale.zero` - set to `true` for scaling to zero, faas-idler must also be deployed which is part of OpenFaaS Pro + +> Note: +Setting `com.openfaas.scale.min` and `com.openfaas.scale.max` to the same value allows you to disable the auto-scaling functionality of OpenFaaS. +Setting `com.openfaas.scale.factor=0` also disables the auto-scaling functionality of OpenFaaS. + +For each alert fired, the auto-scaler will add a number of replicas, which is a defined percentage of the max replicas. This percentage can be set using `com.openfaas.scale.factor`. For example, setting `com.openfaas.scale.factor=100` will instantly scale to max replicas. This label allows you to define the overall scaling behavior of the function. + +> Note: Active alerts can be viewed in the "Alerts" tab of Prometheus which is deployed with OpenFaaS. -If you want to use your own set of criteria for idling functions then you can make use of the OpenFaaS REST API to decide when to scale functions to zero. You can build and deploy your own custom controller for this task.
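The legacy alert-driven step scaling described above can be modelled as follows. This is an illustrative sketch of the documented behaviour, not the actual gateway handler; the rounding of the step is an assumption:

```python
import math

def replicas_on_alert(current, max_replicas=20, scale_factor=20):
    """Model of the legacy CE behaviour: each alert fired adds a step of
    scale_factor% of the max replicas, capped at the maximum."""
    if scale_factor == 0:
        return current  # a factor of 0 disables auto-scaling
    step = math.ceil(max_replicas * scale_factor / 100.0)
    return min(max_replicas, current + step)

print(replicas_on_alert(1))                    # 1 + 20% of 20 = 5
print(replicas_on_alert(1, scale_factor=100))  # jumps straight to the max of 20
```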
diff --git a/docs/architecture/metrics.md b/docs/architecture/metrics.md index 8d65cd27..5d84212d 100644 --- a/docs/architecture/metrics.md +++ b/docs/architecture/metrics.md @@ -4,13 +4,17 @@ The Gateway component exposes several metrics to help you monitor the health and behavior of your functions -| Metric | Type | Description | Labels | -| ----------------------------------- | ---------- | ----------------------------------- | -------------------------- | -| `gateway_functions_seconds` | histogram | Function invocation time taken | `function_name` | -| `gateway_function_invocation_total` | counter | Function invocation count | `function_name`, `code` | -| `gateway_service_count` | counter | Number of function replicas | `function_name` | -| `http_request_duration_seconds` | histogram | Seconds spent serving HTTP requests | `method`, `path`, `status` | -| `http_requests_total` | counter | The total number of HTTP requests | `method`, `path`, `status` | +| Metric | Type | Description | Labels | Edition | +| ----------------------------------- | ---------- | ----------------------------------- | -------------------------- |--------------------| +| `gateway_functions_seconds` | histogram | Function invocation time taken | `function_name` | Community Edition | +| `gateway_function_invocation_total` | counter | Function invocation count | `function_name`, `code` | Community Edition | +| `gateway_service_count` | counter | Number of function replicas | `function_name` | Community Edition | +| `gateway_service_target` | gauge | Target load for the function | `function_name` | Pro Edition | +| `gateway_service_min` | gauge | Min number of function replicas | `function_name` | Pro Edition | +| `http_request_duration_seconds` | histogram | Seconds spent serving HTTP requests | `method`, `path`, `status` | Community Edition | +| `http_requests_total` | counter | The total number of HTTP requests | `method`, `path`, `status` | Community Edition | + The `http_request*` metrics record the latency and statistics of `/system/*` routes to monitor the OpenFaaS gateway and its provider. The `/async-function` route is also recorded in these metrics to observe asynchronous ingestion rate and latency. @@ -20,10 +24,10 @@ These basic metrics can be used to track the health of your functions as well a ### Function invocation rate -Return the per-second rate of invocation as measured over the previous 20 seconds: +Return the per-second rate of invocation as measured over the previous 1 minute: ``` -rate ( gateway_function_invocation_total [20s]) +rate ( gateway_function_invocation_total [1m]) ``` ### Function replica count / scaling @@ -65,16 +69,11 @@ rate ( gateway_function_invocation_total{function_name='echo'} [20s]) The classic watchdog and the of-watchdog both provide Prometheus instrumentation on TCP port 8081 on the path /metrics. This is to enable the use-case of HPAv2 from the Kubernetes ecosystem.
-| Metric | Type | Description | Labels | -| ----------------------------------- | ---------- | ----------------------------------- | -------------------------- | -| `http_request_duration_seconds` | histogram | Seconds spent serving HTTP requests | `method`, `path`, `status` | -| `http_requests_total` | counter | The total number of HTTP requests | `method`, `path`, `status` | +| Metric | Type | Description | Labels | Edition | +| ----------------------------------- | ---------- | ----------------------------------- | ---------------------------- |--------------------| +| `http_request_duration_seconds` | histogram | Seconds spent serving HTTP requests | `method`, `path`, `status` | Community Edition | +| `http_requests_total` | counter | The total number of HTTP requests | `method`, `path`, `status` | Community Edition | +| `http_requests_in_flight` | gauge | The number of HTTP requests in flight | `method`, `path`, `status` | Pro Edition | The `http_request*` metrics record the latency and statistics of `/system/*` routes to monitor the OpenFaaS gateway and its provider. The `/async-function` route is also recorded in these metrics to observe asynchronous ingestion rate and latency. -### Minimum watchdog versions - -The metrics endpoint was added in the following versions and is enabled automatically. 
- -* watchdog: 0.13.0 -* of-watchdog 0.5.0 diff --git a/docs/contributing/get-started.md b/docs/contributing/get-started.md index 154ea566..618fbed9 100644 --- a/docs/contributing/get-started.md +++ b/docs/contributing/get-started.md @@ -8,14 +8,14 @@ Examples may be: writing some Go code, reviewing a PR, helping with the website ## Five practical ideas to get started -* Try the [workshop](https://github.com/openfaas/workshop) +* Try the samples in [Serverless For Everyone Else](https://gumroad.com/l/serverless-for-everyone-else) + +* Write a blog post and add it to the [community file](https://github.com/openfaas/faas/blob/master/community.md) * Read the [architecture diagrams](https://docs.openfaas.com/architecture/gateway/) * Submit a function to the [Function Store](https://github.com/openfaas/store) -* Write a blog post and add it to the [community file](https://github.com/openfaas/faas/blob/master/community.md) - * Improve the [OpenFaaS CLI tooling](https://github.com/openfaas/faas-cli) with a PR or documentation ## Get up to speed diff --git a/docs/index.md b/docs/index.md index aef5a7bc..1a794501 100644 --- a/docs/index.md +++ b/docs/index.md @@ -8,72 +8,53 @@ OpenFaaS® makes it easy for developers to deploy event-driven functions and ## Highlights -* Open source functions framework - run it on any cloud without fear of lock-in +* Open source functions framework - run functions on any cloud without fear of lock-in * Write functions in any language and package them in Docker/OCI-format containers * Easy to use - built-in UI, powerful CLI and one-click installation * Scale as you go - handle spikes in traffic, and scale down when idle * Active community - contribute and belong +* Community Edition for developers, [Pro edition & support for production](/openfaas-pro/introduction/) ![Stack](https://github.com/openfaas/faas/raw/master/docs/of-layer-overview.png) See also: [Tech stack & layers](/architecture/stack/) & [Preparing for 
production](/architecture/production/) -## Self-service training +## Get started -* **The Official Manual for OpenFaaS**: Serverless For Everyone Else +Start out with one of the options from our self-service training range: - In [Serverless For Everyone Else](https://gumroad.com/l/serverless-for-everyone-else), you'll gain a working knowledge of the value and use-cases of serverless functions. You'll then build your own with Node.js and learn how to secure and monitor them. Learn directly from the OpenFaaS Founder. +* [See our Official Training page](/tutorial/training/) - [![eBook cover](/images/serverless-for-everyone-else.png)](https://gumroad.com/l/serverless-for-everyone-else) - - [Get the eBook and training video on Gumroad](https://gumroad.com/l/serverless-for-everyone-else) - -* Training course from the [LinuxFoundation](LinuxFoundation): Introduction to Serverless on Kubernetes - - This training course "Introduction to Serverless on Kubernetes" written by the project founder and commissioned by the LinuxFoundation provides an overview of what you need to know to build functions and operate OpenFaaS on public cloud. - - Examples are provided in Python. - - Training course: [Introduction to Serverless on Kubernetes](https://www.edx.org/course/introduction-to-serverless-on-kubernetes) - -* [OpenFaaS Blog](https://www.openfaas.com/blog/) - read latest news and tutorials - - Learn about features and capabilities through case-studies and tutorials. - -## Going to production - -OpenFaaS OSS is suitable for developers. OpenFaaS Pro & Enterprise is for production. - -[Contact us](https://openfaas.com/support/) to find out more about OpenFaaS in production - -## Become a Sponsor - -!!! info "How is OpenFaaS funded?" - OpenFaaS is free and open-source. As an end-user, supporter or commercial company, you can become a sponsor and get unique benefits, whilst also supporting the project and community. 
- -You can access exclusive updates, discounts, news and tutorials through the [The Treasure Trove Portal](https://faasd.exit.openfaas.pro/function/trove/) with over 80 updates from the OpenFaaS Founder going back to 2019. - -* [Become a GitHub Sponsor today](https://github.com/support/) - -### Quickstart +Or go ahead and deploy OpenFaaS straight to Kubernetes/OpenShift or to a VM using faasd: ![Portal](https://github.com/openfaas/faas/raw/master/docs/inception.png) *Pictured: API gateway portal - designed for ease of use* -Deploy OpenFaaS to Kubernetes, OpenShift, or faasd [deployment guides](./deployment/) +* [Deployment guides](./deployment/) ## Video presentations -* [Meet faasd. Look Ma’ No Kubernetes!](https://www.youtube.com/watch?v=ZnZJXI377ak&feature=youtu.be) -* [Getting Beyond FaaS: The PLONK Stack for Kubernetes Developers](https://www.youtube.com/watch?v=NckMekZXRt8&feature=emb_title) -* [Digital Transformation of Vision Banco Paraguay with Serverless Functions @ KubeCon late-2018](https://kccna18.sched.com/event/GraO/digital-transformation-of-vision-banco-paraguay-with-serverless-functions-alex-ellis-vmware-patricio-diaz-vision-banco-saeca) +* [Meet faasd. Look Ma’ No Kubernetes! 
2020](https://www.youtube.com/watch?v=ZnZJXI377ak&feature=youtu.be) +* [Getting Beyond FaaS: The PLONK Stack for Kubernetes Developers 2019](https://www.youtube.com/watch?v=NckMekZXRt8&feature=emb_title) +* [Serverless Beyond the Hype - Alex Ellis - GOTO 2018](https://www.youtube.com/watch?v=yOpYYYRuDQ0) +* [How LivePerson is Tailoring its Conversational Platform Using OpenFaaS - Simon Pelczer 2019](https://www.youtube.com/watch?v=bt06Z28uzPA) +* [Digital Transformation of Vision Banco Paraguay with Serverless Functions @ KubeCon 2018](https://kccna18.sched.com/event/GraO/digital-transformation-of-vision-banco-paraguay-with-serverless-functions-alex-ellis-vmware-patricio-diaz-vision-banco-saeca) * [Introducing "faas" - Cool Hacks Keynote at Dockercon 2017](https://blog.docker.com/2017/04/dockercon-2017-mobys-cool-hack-sessions/) ## Community OpenFaaS has a thriving community of Open Source contributors and users. +### Going to production + +!!! info "Do we need the Community Edition or Pro?" + The OpenFaaS Community Edition is suitable for developers. OpenFaaS Pro is built for use in production. + + You can find out more about [OpenFaaS Pro here](/openfaas-pro/introduction) or [contact us to find out more](https://openfaas.com/support/). + +### Have you written a blog post or given a talk? + Have you written a blog about OpenFaaS? Send a Pull Request to the community page below. * [Read blogs/articles and find events about OpenFaaS](https://github.com/openfaas/faas/blob/master/community.md) @@ -84,24 +65,25 @@ Several dozen end-user companies have given permission for their logo to be used If you are using OpenFaaS for internal or production use, please feel free to send a pull request to the ADOPTERS.md file, to email support@openfaas.com or to comment on [this issue](https://github.com/openfaas/faas/issues/776). -### Contributing +### Become a Sponsor -OpenFaaS is written in Golang and contributions are welcomed from end-users and the community. 
It could mean providing feedback through testing features, proposing enhancements, or getting involved with the maintenance of almost to 40 projects. +!!! info "How is OpenFaaS funded?" + OpenFaaS is free and open-source. As an end-user, supporter or commercial company, you can become a sponsor and get unique benefits, whilst also supporting the project and community. -* View the [contributing page](/community/#contribute) +You can access exclusive updates, discounts, news and tutorials through [The Treasure Trove Portal](https://faasd.exit.openfaas.pro/function/trove/) with over 80 updates from the OpenFaaS Founder going back to 2019. -If you would like to contribute to the documentation site or find out more check out the [docs repo](https://github.com/openfaas/docs). +* [Become a GitHub Sponsor today](https://github.com/support/) -## Grafana dashboards +### Contributing -Example of a Grafana dashboards linked to OpenFaaS showing auto-scaling live in action: [here](https://grafana.com/dashboards/3526) +OpenFaaS is written in Golang and contributions are welcomed from end-users and the community. Contributing could mean providing feedback through testing features, proposing enhancements, or getting involved with the maintenance of almost 40 projects. -![Preview of the dashboard](https://pbs.twimg.com/media/C9caE6CXUAAX_64.jpg:large) +* View the [contributing page](/community/#contribute) -An alternative community dashboard is [available here](https://grafana.com/dashboards/3434) +If you would like to contribute to the documentation site or find out more, check out the [docs repo](https://github.com/openfaas/docs). ## Governance The core of OpenFaaS is an independent open-source project originally created by [Alex Ellis](https://www.alexellis.io) in 2016. It is now being built and shaped by a [growing community of contributors and end-users](https://www.openfaas.com/team/).
-OpenFaaS is hosted by OpenFaaS Ltd (registration: 11076587), a company which also offers commercial services, homepage sponsorships, and support. OpenFaaS ® is a registered trademark in England and Wales. \ No newline at end of file +OpenFaaS is hosted by OpenFaaS Ltd (registration: 11076587), a company which also offers commercial services, homepage sponsorships, and support. OpenFaaS ® is a registered trademark in England and Wales. diff --git a/docs/openfaas-pro/introduction.md b/docs/openfaas-pro/introduction.md index 195f8a91..de384f2c 100644 --- a/docs/openfaas-pro/introduction.md +++ b/docs/openfaas-pro/introduction.md @@ -1,10 +1,12 @@ ## OpenFaaS Pro -OpenFaaS is meant for open-source developers, OpenFaaS Pro is meant for production. +OpenFaaS Pro is a commercially licensed distribution of OpenFaaS with additional features, configurations and commercial support from the founders. -### Additional capabilities +!!! info "Do we need the Community Edition or Pro?" + + OpenFaaS Community Edition (CE) is meant for open-source developers, OpenFaaS Pro is meant for production. -OpenFaaS Pro is a commercially licensed distribution of OpenFaaS with additional features, configurations and support. +### Additional capabilities Eventing: @@ -24,7 +26,7 @@ Service providers and large teams: * [Build functions via REST API](/openfaas-pro/builder) to create your functions from source code, without creating and maintaining hundreds of independent CI jobs. 
-On our roadmap: +### On our roadmap * A new Pro UI dashboard for managing and monitoring OpenFaaS functions across namespaces * Enhanced RBAC for functions and the OpenFaaS REST API diff --git a/docs/tutorials/featured.md b/docs/tutorials/featured.md index 8cf49a00..20b42f67 100644 --- a/docs/tutorials/featured.md +++ b/docs/tutorials/featured.md @@ -1,5 +1,7 @@ # Featured tutorials +See also: [Official Training resources](/tutorials/training) + ## OpenFaaS deployment guides for Kubernetes - [Amazon EKS](https://www.weave.works/blog/getting-started-with-openfaas-kubernetes-operator-on-eks) @@ -30,14 +32,8 @@ See also: [performance testing](/architecture/performance/) and [going to produc ## Service Mesh +* [Learn how Istio can provide a service mesh for your functions](https://www.openfaas.com/blog/istio-functions/) * [Linkerd2 and OpenFaaS lab](https://github.com/openfaas-incubator/openfaas-linkerd2) for automatic TLS between pods, retries, timeouts and more. -* [Installing Istio and OpenFaaS](https://github.com/stefanprodan/istio-gke/blob/master/docs/openfaas/00-index.md) - instructions written for GKE, but applies to any Kubernetes service. Covers: mTLS, access policies, external traffic access, TLS and canary deployments. 
- -## Workshop / labs - -* [Official workshop: Kubernetes & Swarm](https://github.com/openfaas/workshop) -* [Linkerd2 and OpenFaaS lab](https://github.com/openfaas-incubator/openfaas-linkerd2) -* [Lab environment - VSCode in the browser with k3s](https://github.com/openfaas-incubator/workshop-vscode) ## ARM / Raspberry Pi diff --git a/docs/tutorials/training.md b/docs/tutorials/training.md new file mode 100644 index 00000000..33d4723b --- /dev/null +++ b/docs/tutorials/training.md @@ -0,0 +1,44 @@ +## Official Training + +### Serverless For Everyone Else + +[Serverless For Everyone Else](https://gumroad.com/l/serverless-for-everyone-else) is available in eBook and video course format and teaches you the fundamentals of: + +* Managing secrets +* Template development +* The OpenFaaS REST API +* Function development in Node.js +* Accessing databases +* Securing and monitoring functions + +No Kubernetes knowledge is required. Also available in a team edition. + +* [Serverless For Everyone Else](https://gumroad.com/l/serverless-for-everyone-else) + +### OpenFaaS and Golang + +Everyday Golang is a practical, hands-on guide to writing CLIs, web pages, and microservices in Go. It also features a chapter dedicated to development and testing of functions using OpenFaaS and Go. + +* [Everyday Golang](https://openfaas.gumroad.com/l/everyday-golang) + +### Introduction to Serverless on Kubernetes + +We partnered with the Linux Foundation to bring you the training course: [Introduction to Serverless on Kubernetes](https://www.openfaas.com/blog/introduction-to-serverless-linuxfoundation/). + +All of the examples are written in Python, and the course also covers: + +* OpenFaaS on Kubernetes +* Writing functions in Python +* Metrics & monitoring +* Ingress and routing for functions 
+* Custom Grafana dashboard + +* [Introduction to Serverless on Kubernetes](https://www.openfaas.com/blog/introduction-to-serverless-linuxfoundation/) + +### Serverless on Kubernetes primer for your developer community + +Book a 2-hour workshop for your developer community with Alex Ellis. + +Get an overview of the landscape, available FaaS projects for Kubernetes, custom demos, free copies of Serverless For Everyone Else, and a chance for Q&A. + +Send us an email for more details: [contact@openfaas.com](mailto:contact@openfaas.com) diff --git a/docs/tutorials/workshop.md b/docs/tutorials/workshop.md deleted file mode 100644 index b67b4d48..00000000 --- a/docs/tutorials/workshop.md +++ /dev/null @@ -1,7 +0,0 @@ -## Official Workshop - -The OpenFaaS community has built a set of hands-on labs which you can work through at your own pace to learn how to build Serverless functions with Python. - -It's a good place to start if you're new to the project and want to build a working knowledge of how to start shipping serverless functions. - -* [Start the workshop on GitHub](https://github.com/openfaas/workshop) diff --git a/mkdocs.yml b/mkdocs.yml index 1ae232e2..5a5b386f 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -158,8 +158,9 @@ nav: - Performance: ./architecture/performance.md - FaaS Provider: ./architecture/faas-provider.md - Logs Provider: ./architecture/logs-provider.md + - Training: + - Overview: ./tutorials/training.md - Tutorials: - - Workshop: ./tutorials/workshop.md - Expanded timeouts: ./tutorials/expanded-timeouts.md - CLI with Node.js: ./tutorials/CLI-with-node.md - First Python Function: ./tutorials/first-python-function.md
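For reference, this is a sketch of how the relevant part of the `nav` section in `mkdocs.yml` reads after the hunk above is applied: the new top-level `Training` entry sits before `Tutorials`, and the old `Workshop` page is gone. Entries not shown in the hunk are abbreviated here with a comment.

```yaml
nav:
  # ... earlier entries unchanged ...
  - Performance: ./architecture/performance.md
  - FaaS Provider: ./architecture/faas-provider.md
  - Logs Provider: ./architecture/logs-provider.md
  - Training:
      - Overview: ./tutorials/training.md
  - Tutorials:
      - Expanded timeouts: ./tutorials/expanded-timeouts.md
      - CLI with Node.js: ./tutorials/CLI-with-node.md
      - First Python Function: ./tutorials/first-python-function.md
```

MkDocs derives the URL `/tutorials/training/` from the `./tutorials/training.md` path, which is why the "See also" link added to `featured.md` should point there.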