Deploying NGINX and NGINX Plus with Docker


Editor – The NGINX Plus Dockerfiles for Alpine Linux and Debian were updated in February 2022 to reflect the latest software versions. They also (along with the revised instructions) use Docker secrets to pass license information when building an NGINX Plus image.

Docker is an open platform for building, shipping, and running distributed applications as containers (lightweight, standalone, executable packages of software that include everything needed to run an application). Containers can in turn be deployed and orchestrated by container orchestration platforms such as Kubernetes. (In addition to the Docker container technology discussed in this blog, NGINX provides the F5 NGINX Ingress Controller in NGINX Open Source‑based and NGINX Plus-based versions; for NGINX Plus subscribers, support is included at no extra cost.)

As software applications, NGINX Open Source and F5 NGINX Plus are great use cases for Docker, and we publish an NGINX Open Source image on Docker Hub, the repository of Docker images. This post explains how to:

  • Use the NGINX Open Source image from Docker Hub
  • Manage content and configuration files in a containerized NGINX instance
  • Manage logging
  • Control NGINX in a running container
  • Create and deploy a Docker image of NGINX Plus

Introduction

The Docker open platform includes the Docker Engine – the open source runtime that builds, runs, and orchestrates containers – and Docker Hub, a hosted service where Dockerized applications are distributed, shared, and collaborated on by the entire development community or within the confines of a specific organization.

Docker containers enable developers to focus their efforts on application “content” by separating applications from the constraints of infrastructure. Dockerized applications are instantly portable to any infrastructure – laptop, bare‑metal server, VM, or cloud – making them modular components that can be readily assembled and reassembled into fully featured distributed applications and continuously innovated on in real time.

For more information about Docker, see Why Docker? or the full Docker documentation.

Using the NGINX Open Source Docker Image

You can create an NGINX instance in a Docker container using the NGINX Open Source image from Docker Hub.

Let’s start with a very simple example. To launch an instance of NGINX running in a container and using the default NGINX configuration, run this command:

# docker run --name mynginx1 -p 80:80 -d nginx
fcd1fb01b14557c7c9d991238f2558ae2704d129cf9fb97bb4fadf673a58580d

This command creates a container named mynginx1 based on the NGINX image. The command returns the long form of the container ID, which is used in the name of log files; see Managing Logging.

The -p option tells Docker to map the port exposed in the container by the NGINX image – port 80 – to the specified port on the Docker host. The first parameter specifies the port on the Docker host, and the second parameter specifies the port exposed in the container.

The -d option specifies that the container runs in detached mode, which means that it continues to run until stopped but does not respond to commands run on the command line. In the next section we explain how to interact with the container.

To verify that the container was created and is running, and to see the port mappings, we run docker ps. (We’ve split the output across multiple lines here to make it easier to read.)

# docker ps
CONTAINER ID  IMAGE         COMMAND               CREATED         ...  
fcd1fb01b145  nginx:latest  "nginx -g 'daemon of  16 seconds ago  ... 

    ... STATUS          PORTS                NAMES
    ... Up 15 seconds   0.0.0.0:80->80/tcp   mynginx1

The PORTS field in the output reports that port 80 on the Docker host is mapped to port 80 in the container. Another way to verify that NGINX is running is to make an HTTP request to that port; NGINX responds with the HTML for its default welcome page:

# curl http://localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
 body {
 width: 35em;
 margin: 0 auto;
 font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="https://www.nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Working with the NGINX Docker Container

So now we have a working NGINX Docker container, but how do we manage the content and the NGINX configuration? And what about logging?

A Note About SSH

It is common to enable SSH access to NGINX instances, but the NGINX image does not have OpenSSH installed, because Docker containers are generally intended to be for a single purpose (in this case running NGINX). Instead we’ll use other methods supported by Docker.

As an alternative to the methods described below, you can run the following command to open an interactive shell to a running NGINX container (instead of starting an SSH session). However, we recommend this only for advanced users.

  • On Alpine Linux systems:

    # docker exec -it <NGINX_container_ID> sh
  • On Debian systems:

    # docker exec -it <NGINX_container_ID> bash

Managing Content and Configuration Files

There are several ways you can manage both the content served by NGINX and the NGINX configuration files. Here we cover a few of the options.

Option 1 – Maintain the Content and Configuration on the Docker Host

When the container is created, we can tell Docker to mount a local directory on the Docker host to a directory in the container. The NGINX image uses the default NGINX configuration, with /usr/share/nginx/html as the document root and the configuration files in /etc/nginx. For a Docker host with content in the local directory /var/www and configuration files in /var/nginx/conf, run this command:

# docker run --name mynginx2 --mount type=bind,source=/var/www,target=/usr/share/nginx/html,readonly --mount type=bind,source=/var/nginx/conf,target=/etc/nginx,readonly -p 80:80 -d nginx

Now any change made to the files in the local directories /var/www and /var/nginx/conf on the Docker host is reflected in the directories /usr/share/nginx/html and /etc/nginx in the container. The readonly option means these directories can be changed only on the Docker host, not from within the container.

Option 2 – Copy Files from the Docker Host

Another option is to have Docker copy the content and configuration files from a local directory on the Docker host during container creation. Once the container is created, you update the files either by building a new image and container when they change, or by modifying the files in the container directly. A simple way to copy the files is to create a Dockerfile with commands that are run during generation of a new Docker image based on the NGINX image from Docker Hub. For the file‑copy (COPY) commands in the Dockerfile, the local directory path is relative to the build context where the Dockerfile is located.

In our example, the content is in the content directory and the configuration files are in the conf directory, both subdirectories of the directory where the Dockerfile is located. The NGINX image includes default NGINX configuration files as /etc/nginx/nginx.conf and /etc/nginx/conf.d/default.conf. Because we instead want to use the configuration files from the host, we include a RUN command that deletes the default files:

FROM nginx
RUN rm /etc/nginx/nginx.conf /etc/nginx/conf.d/default.conf
COPY content /usr/share/nginx/html
COPY conf /etc/nginx

We create our own NGINX image by running the following command from the directory where the Dockerfile is located. Note the period (“.”) at the end of the command. It defines the current directory as the build context, which contains the Dockerfile and the directories to be copied.

# docker build -t mynginx_image1 .

Now we run this command to create a container called mynginx3 based on the mynginx_image1 image:

# docker run --name mynginx3 -p 80:80 -d mynginx_image1

If we want to make changes to the files in the container, we use a helper container as described in Option 3.

Option 3 – Maintain Files in the Container

As mentioned in A Note About SSH, we can’t use SSH to access the NGINX container, so if we want to edit the content or configuration files directly we have to create a helper container that has shell access. For the helper container to have access to the files, we must create a new image that has the proper Docker data volumes defined for the image. Assuming we want to copy files as in Option 2 while also defining volumes, we use the following Dockerfile:

FROM nginx
RUN rm /etc/nginx/nginx.conf /etc/nginx/conf.d/default.conf
COPY content /usr/share/nginx/html
COPY conf /etc/nginx
VOLUME /usr/share/nginx/html
VOLUME /etc/nginx

We then create the new NGINX image by running the following command (again note the final period):

# docker build -t mynginx_image2 .

Now we run this command to create an NGINX container (mynginx4) based on the mynginx_image2 image:

# docker run --name mynginx4 -p 80:80 -d mynginx_image2

We then run the following command to start a helper container mynginx4_files that has a shell, enabling us to access the content and configuration directories of the mynginx4 container we just created:

# docker run -i -t --volumes-from mynginx4 --name mynginx4_files debian /bin/bash
root@b1cbbad63dd1:/#

The new mynginx4_files helper container runs in the foreground with a persistent standard input (the -i option) and a tty (the -t option). All volumes defined in mynginx4 are mounted as local directories in the helper container.

The debian argument means that the helper container uses the Debian image from Docker Hub. Because the NGINX image also uses Debian (and all of our examples so far use the NGINX image), it is most efficient to use Debian for the helper container, rather than having Docker load another operating system.

The /bin/bash argument means that the bash shell runs in the helper container, presenting a shell prompt that you can use to modify files as needed.

To start and stop the container, run the following commands:

# docker start mynginx4_files
# docker stop mynginx4_files

To exit the shell but leave the container running, press Ctrl+p followed by Ctrl+q. To regain shell access to a running container, run this command:

# docker attach mynginx4_files

To exit the shell and terminate the container, run the exit command.

Managing Logging

You can configure either default or customized logging.

Using Default Logging

The NGINX image is configured to send the main NGINX access and error logs to the Docker log collector by default. This is done by linking them to stdout and stderr respectively; all messages from both logs are then written to the file /var/lib/docker/containers/<container_ID>/<container_ID>-json.log on the Docker host, where <container_ID> is the long‑form ID returned when you create a container. For the initial container we created in Using the NGINX Open Source Docker Image, for example, it is fcd1fb01b14557c7c9d991238f2558ae2704d129cf9fb97bb4fadf673a58580d.

To retrieve the container ID for an existing container, run this command, where <container_name> is the value set by the --name parameter when the container is created (for the container ID above, for example, it is mynginx1):

# docker inspect --format '{{ .Id }}' <container_name>

Although you can view the logs by opening the <container_ID>-json.log file directly, it is usually easier to run this command:

# docker logs <container_name>

You can also use the Docker Engine API to extract the log messages, by issuing a GET request against the Docker Unix socket. This command returns both the access log (represented by stdout=1) and the error log (stderr=1), but you can request them singly as well:

# curl --unix-socket /var/run/docker.sock "http://localhost/containers/<container_name>/logs?stdout=1&stderr=1"

To learn about other query parameters, see the Docker Engine API documentation (search for “Get container logs” on that page).

Using Customized Logging

If you want to implement another method of log collection, or if you want to configure logging differently in certain configuration blocks (such as server{} and location{}), define a Docker volume for the directory or directories in which to store the log files in the container, create a helper container to access the log files, and use whatever logging tools you like. To implement this, create a new image that contains the volume or volumes for the logging files.

For example, to configure NGINX to store log files in /var/log/nginx/log, we can start with the Dockerfile from Option 3 and simply add a VOLUME definition for this directory:

FROM nginx
RUN rm /etc/nginx/nginx.conf /etc/nginx/conf.d/default.conf
COPY content /usr/share/nginx/html
COPY conf /etc/nginx
VOLUME /var/log/nginx/log

We can then create an image as described above and use it to create an NGINX container and a helper container that have access to the logging directory. The helper container can have any desired logging tools installed.

Controlling NGINX

Since we do not have direct access to the command line of the NGINX container, we cannot use the nginx command to control NGINX. Fortunately we can use signals to control NGINX, and Docker provides the kill command for sending signals to a container.

To reload the NGINX configuration, run this command:

# docker kill -s HUP <container_name>

To restart NGINX, run this command to restart the container:

# docker restart <container_name>

Deploying NGINX Plus with Docker

So far we have discussed Docker for NGINX Open Source, but you can also use it with the commercial product, NGINX Plus. The difference is that you first need to create an NGINX Plus image, because as a commercial offering NGINX Plus is not available on Docker Hub. Fortunately, this is quite easy to do.

Note: Never upload your NGINX Plus images to a public repository such as Docker Hub. Doing so violates your license agreement.

Creating a Docker Image of NGINX Plus

To generate an NGINX Plus image, first create a Dockerfile. The examples we provide here use Alpine Linux 3.15 and Debian 11 (Bullseye) as the base Docker images. Before you can create the NGINX Plus Docker image, you have to download your version of the nginx-repo.crt and nginx-repo.key files. NGINX Plus customers can find them at the customer portal; if you are doing a free trial of NGINX Plus, they were provided with your trial package. Copy the files to the directory where the Dockerfile is located (the Docker build context).

As with NGINX Open Source, by default the NGINX Plus access and error logs are linked to the Docker log collector. No volumes are specified, but you can add them if desired, or each Dockerfile can be used to create base images from which you can create new images with volumes specified, as described previously.

We purposely do not specify an NGINX Plus version in the sample Dockerfiles, so that you don’t have to edit the file when you update to a new release of NGINX Plus. We have, however, included commented versions of the relevant instructions for you to uncomment if you want to make the file version‑specific.

Similarly, we’ve included instructions (commented out) that install official dynamic modules for NGINX Plus.

By default, no files are copied from the Docker host as a container is created. You can add COPY definitions to each Dockerfile, or the image you create can be used as the basis for another image as described above.

NGINX Plus Dockerfile (Debian 11)

NGINX Plus Dockerfile (Alpine Linux 3.15)
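
The complete Dockerfiles are not reproduced here. As a rough sketch of the approach they take – with the package list and repository setup abbreviated, and details that may differ from the official files – the Debian version looks something like this:

FROM debian:bullseye-slim

# Mount the repo certificate and key passed via the --secret options at build
# time (ids nginx-crt and nginx-key), so they never persist in an image layer;
# then configure the NGINX Plus repository and install the nginx-plus package
RUN --mount=type=secret,id=nginx-crt,dst=/etc/ssl/nginx/nginx-repo.crt,mode=0644 \
    --mount=type=secret,id=nginx-key,dst=/etc/ssl/nginx/nginx-repo.key,mode=0644 \
    apt-get update && apt-get install -y --no-install-recommends ca-certificates gnupg curl \
    && curl -fsSL https://cs.nginx.com/static/keys/nginx_signing.key \
       | gpg --dearmor > /usr/share/keyrings/nginx-archive-keyring.gpg \
    && echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] https://pkgs.nginx.com/plus/debian bullseye nginx-plus" \
       > /etc/apt/sources.list.d/nginx-plus.list \
    && apt-get update && apt-get install -y nginx-plus \
    && rm -rf /var/lib/apt/lists/*

# Forward the access and error logs to the Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

EXPOSE 80
STOPSIGNAL SIGQUIT
CMD ["nginx", "-g", "daemon off;"]

The Alpine version follows the same pattern with apk package commands and the pkgs.nginx.com/plus/alpine repository.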

Creating the NGINX Plus Image

With the Dockerfile, nginx-repo.crt, and nginx-repo.key files in the same directory, run the following command there to create a Docker image called nginxplus (as before, note the final period):

# DOCKER_BUILDKIT=1 docker build --no-cache -t nginxplus --secret id=nginx-crt,src=</path/to/your/nginx-repo.crt> --secret id=nginx-key,src=</path/to/your/nginx-repo.key> .

The DOCKER_BUILDKIT=1 flag indicates that we are using Docker BuildKit to build the image, as required when including the --secret option, which is discussed below.

The --no-cache option tells Docker to build the image from scratch and ensures the installation of the latest version of NGINX Plus. If the Dockerfile was previously used to build an image and you do not include the --no-cache option, the new image uses the version of NGINX Plus from the Docker cache. (As noted, we purposely do not specify an NGINX Plus version in the Dockerfile so that the file does not need to change at every new release of NGINX Plus.) Omit the --no-cache option if it’s acceptable to use the NGINX Plus version from the previously built image.

The --secret option passes the certificate and key for your NGINX Plus license to the Docker build context without risking exposure of the data or having the data persist between Docker build layers. The values of the id arguments cannot be changed without altering the Dockerfile, but you need to set the src arguments to the path to your NGINX Plus certificate and key files (the same directory where you are building the Docker image if you followed the previous instructions).

Output like the following from the docker images nginxplus command indicates that the image was created successfully:

# docker images nginxplus
REPOSITORY  TAG     IMAGE ID      CREATED        VIRTUAL SIZE
nginxplus   latest  ef2bf65931cf  6 seconds ago  91.2 MB

To create a container named mynginxplus based on this image, run this command:

# docker run --name mynginxplus -p 80:80 -d nginxplus

You can control and manage NGINX Plus containers in the same way as NGINX Open Source containers.

Summary

NGINX, NGINX Plus, and Docker work extremely well together. Whether you use the NGINX Open Source image from Docker Hub or create your own NGINX Plus image, you can easily spin up new instances of NGINX and NGINX Plus in Docker containers and deploy them in your Kubernetes environment. You can also easily create new Docker images from the base images, making your containers even easier to control and manage. Make sure that all NGINX Plus instances running in your Docker containers are covered by your subscription. For details, please contact the NGINX sales team.

There is much more to Docker than we have been able to cover in this article. For more information, download our free O’Reilly eBook – Container Networking: From Docker to Kubernetes – or check out www.docker.com.

Announcing NGINX Plus R26


We’re happy to announce the availability of NGINX Plus Release 26 (R26). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

New and enhanced features in NGINX Plus R26 include:

  • Faster JWT validation with JSON Web Key Set caching – Continuing the series of enhancements to JSON Web Tokens (JWT) support added over the last few releases, we introduce in‑memory caching of JSON Web Key Sets (JWKS), which substantially reduces overhead for JWT validation.
  • Hardened TLS handshakes – NGINX Plus rejects the TLS handshake if the client proposes a communication protocol via ALPN that doesn’t match the NGINX configuration context for the session being established (for example proposes IMAP to a virtual server in the http{} context).
  • Enhancements to the NGINX JavaScript module – Asynchronous functions that use the async and await keywords and the Promise object are now supported, and we have implemented the WebCrypto API for cryptographic operations (like generating random numbers or encrypting cookies).

Rounding out this release are support for the IBM System Z (s390x) architecture, the ability to close each direction of a TCP connection independently, and support for version 2 of the Perl Compatible Regular Expression (PCRE) library.

Important Changes in Behavior

NGINX JavaScript Module No Longer Supports js_include

As announced at the release of NGINX Plus R23, in version 0.4.0 of the NGINX JavaScript module the js_import directive replaced the js_include directive. The js_include directive was deprecated at that time and as of this release is no longer supported.

Before upgrading to NGINX Plus R26, replace js_include with js_import in NGINX configuration files and also add an export statement to JavaScript files for functions that are referenced in NGINX configuration. Follow these steps:

  1. Edit NGINX configuration files:

    • Replace js_include with js_import and make a note of the implicit module_name (the JavaScript filename parameter to the directive, without the .js extension).

    • In each directive that references a JavaScript function, prefix the function name with the module_name. For most directives the function name is the first parameter; for the js_set directive (available in both the http{} and stream{} contexts) it is the second parameter.

    For example, change:

    js_set $foo myFunction;

    to:

    js_set $foo module_name.myFunction;
  2. Edit the JavaScript (module_name.js) files that define functions referenced in an NGINX configuration file. Add an export statement like the following to each file, naming the referenced functions:

    export default { myFunction, otherFunction }

    The export statement can appear anywhere in the .js file, but by convention it is placed either directly above the functions or at the end of the file.

Cookie-Flag Module Is Obsolete

The third‑party Cookie‑Flag module was deprecated in NGINX Plus R23 and as announced at that time is no longer available in the NGINX modules repository as of this release.

Before upgrading to NGINX Plus R26, edit your NGINX configuration to replace any occurrences of the set_cookie_flag directive (defined in the deprecated module) with the built‑in proxy_cookie_flags directive.

TLS Negotiation No Longer Supports the NPN Protocol

The way that NGINX establishes TLS and HTTP/2 connections has been updated. As part of the TLS handshake between NGINX and a client (usually a browser), they negotiate which communication protocol will be used in the session established by the handshake (most often, the negotiation upgrades the session from HTTP 1.x to HTTP/2). The Next Protocol Negotiation (NPN) extension to TLS was the first method used for this purpose, but NPN is now considered obsolete and is superseded by the Application‑Layer Protocol Negotiation (ALPN) extension, published as RFC 7301.

NGINX Plus R26 no longer supports NPN, so clients must now use ALPN exclusively.

In addition, our ALPN implementation has been extended and hardened – see Hardened TLS Handshakes.

The Old NGINX Plus Software Repository Is No Longer Updated

At the release of NGINX Plus R24, the package repositories for all NGINX software were reorganized, resulting in changes to the NGINX Plus installation procedure.

When you install or upgrade NGINX Plus, the operating system’s package manager (apt, yum, or equivalent) is configured with the software repository for NGINX Plus.

If upgrading to NGINX Plus R26 on an existing system configured to use the old plus-pkgs.nginx.com repo (those running NGINX Plus R23 or earlier), you must update the package manager to refer to the new pkgs.nginx.com/plus repo. See the instructions in the F5 Knowledge Base.

If performing an initial installation of NGINX Plus R26, see Installing NGINX Plus in the NGINX Plus Admin Guide.

NGINX Plus R26 is not available in the old repository, which will not receive further updates.

Changed Access to the OpenAPI Spec for the NGINX Plus API

The NGINX Plus software package no longer includes the YAML‑format OpenAPI specification and Swagger UI for the NGINX Plus API. You can now access them in the NGINX Plus Admin Guide.

Changes to Platform Support

New operating systems and architectures supported:

  • Alpine Linux 3.15
  • IBM Z (s390x) architecture: CentOS 8.1+, RHEL 8.1+, Ubuntu 20.04 (see Support for the IBM Z (s390x) Architecture below)

Older operating systems removed:

  • Alpine Linux 3.11 (oldest supported version is 3.12)

Older operating systems and architectures deprecated and scheduled for removal in NGINX Plus R27:

  • Power8 architecture (ppc64le)
  • CentOS 8.1+
  • Alpine Linux 3.12

New Features in Detail

Faster JWT Validation with JSON Web Key Set Caching

When validating JSON Web Tokens, NGINX uses a JSON Web Key Set (JWKS) to verify the token’s signature or decrypt the token. JWKSs can either be stored in configuration files or obtained from external services via an HTTP request. Additionally, caching a JWKS in memory has several benefits:

  • Significant reduction in CPU usage
  • Reduced request latency
  • Streamlining of JWT validation, because as part of the caching process JWKS keys are converted from JSON into a binary format that is optimized for cryptographic operations

To cache JWKSs in memory, include the new auth_jwt_key_cache directive and specify the expiration time for each key set (in this example, 3 hours):
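Here is a minimal sketch of such a configuration, assuming the JWKS is fetched from an external auth server (the location names and URLs are illustrative):

server {
    listen 80;

    location / {
        auth_jwt             "restricted";
        auth_jwt_key_request /_jwks;      # obtain the JWKS via a subrequest
        auth_jwt_key_cache   3h;          # cache the key set in memory for 3 hours
        proxy_pass           http://127.0.0.1:8080;   # hypothetical backend
    }

    location = /_jwks {
        internal;
        proxy_pass https://auth.example.com/keys;     # hypothetical JWKS endpoint
    }
}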

When a JWKS is obtained from an external server, we also recommend configuring standard content caching and including the proxy_cache_use_stale directive, which tells NGINX Plus to continue serving an expired JWKS while it’s being refreshed in the background.
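
A sketch of that addition to the configuration above (the cache name and validity period are illustrative):

proxy_cache_path /var/cache/nginx/jwk keys_zone=jwk_cache:1m;   # in the http{} context

location = /_jwks {
    internal;
    proxy_cache           jwk_cache;
    proxy_cache_valid     200 12h;                   # illustrative refresh interval
    proxy_cache_use_stale updating error timeout;    # serve an expired JWKS while refreshing
    proxy_pass            https://auth.example.com/keys;
}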

The benefits of content caching in addition to JWKS caching are twofold:

  • Resiliency – The JWKS can be retrieved from the cache even when it has expired. This increases resiliency when the JWKS provider is temporarily unavailable, but there is a tradeoff of increased security risk.

  • Effect on the authorization server – Expiration of a cached JWKS affects the auth server differently depending on whether JWKS caching is used alone or in combination with content caching:

    • With JWKS caching alone, all incoming authorization requests are forwarded to the auth server until the cache is repopulated with a new version of the expired JWKS. If the auth server responds slowly, there can be a sudden increase in repeated HTTP requests for the JWKS. This extra load might overwhelm the auth service, making the problem worse.

    • When content caching is enabled with serving of expired JWKSs, only one request for the JWKS is forwarded to the auth server, with subsequent requests queued until NGINX can satisfy them after the content cache is populated. This results in lower demand (and thus lower resource consumption) on the auth service.

Hardened TLS Handshakes

Attacks against TLS, such as ALPACA, are increasing. As part of our ongoing commitment to proactively defending against exploits, we have hardened NGINX’s handling of TLS connections.

Application‑Layer Protocol Negotiation (ALPN) is an optional extension to the TLS handshake, used by the client and server during the TLS handshake to choose the Layer 7 protocol they will use in the encrypted session established by the handshake. The most common use case for ALPN is to negotiate the upgrade from HTTP/1.x to HTTP/2 for the session between a browser and a web or app server.

NGINX Plus now rejects a TLS handshake if the client proposes a protocol via ALPN that doesn’t match the NGINX configuration context of the session being established. For example, a virtual server defined in the http{} context requires an ALPN protocol ID for HTTP, while a virtual server in the mail{} context requires a protocol ID for SMTP, POP, or IMAP.

NGINX Plus R26 introduces the $ssl_alpn_protocol variable, available in both the http{} and stream{} contexts, to capture the negotiated protocol. (The $ssl_preread_alpn_protocols variable introduced in the stream{} context in NGINX Plus R15 still captures the list of all protocols advertised by the client during the handshake.)

This snippet defines the alpn log format, which uses $ssl_alpn_protocol to include the negotiated protocol in the alpn= field of access log entries:
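
A sketch of such a log format (the other fields are illustrative):

log_format alpn '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent alpn=$ssl_alpn_protocol';

access_log /var/log/nginx/access.log alpn;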

The new ssl_alpn directive in the stream{} context defines which protocols NGINX Plus accepts. Omit the directive to enable NGINX Plus to consider all protocols presented by the client.
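
For example, this sketch accepts only the HTTP/2 and HTTP/1.1 protocol IDs (the certificate paths and addresses are illustrative):

stream {
    server {
        listen              443 ssl;
        ssl_certificate     /etc/ssl/certs/example.pem;
        ssl_certificate_key /etc/ssl/private/example.key;
        ssl_alpn            h2 http/1.1;   # reject handshakes proposing other protocols
        proxy_pass          127.0.0.1:8443;
    }
}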

NGINX JavaScript Module Enhancements

NGINX Plus R26 incorporates version 0.7.2 of the NGINX JavaScript module (njs) and includes two enhancements:

Note: This section assumes you understand the JavaScript constructs for asynchronous and cryptographic operations. A full analysis of the code snippets is outside the scope of this blog.

Enhanced Support for Asynchronous Functions

In many commonly used scripting languages like PHP, commands and functions execute synchronously – that is, after a script invokes a function it pauses (stops executing) until the function returns a result.

JavaScript can also operate asynchronously: when a function is invoked asynchronously the script continues executing without waiting for the result to return from the function.

Take this sample script:
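
The script is sketched here in rough form (the function name and timeout values are illustrative):

function content(r) {
    var results = [];

    // Both timers fire asynchronously, after the function has already returned
    setTimeout(() => { results.push('a'); }, 20);   // longer timeout
    setTimeout(() => { results.push('b'); }, 10);   // shorter timeout

    r.return(200, results.join(','));   // runs immediately, while results is still empty
}

export default { content };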

It returns an empty response because the njs runtime does not wait for the defined timeouts to elapse (if it did wait, the output would be b,a):

$ curl http://127.0.0.1/
$ 

Handling asynchronous operations correctly is obviously crucial to getting the intended result. JavaScript provides a number of ways to do this, but in the common NGINX use cases, it’s often desirable simply to wrap an asynchronous function in a way that makes the execution flow synchronous. This is where the Promise object and async and await keywords come into play.

ECMAScript 6 (the sixth edition of the ECMA‑262 language specification for JavaScript) defines the Promise object as a return type for asynchronous functions. It exists in one of three states:

  • fulfilled – The operation completed successfully
  • rejected – The operation failed
  • pending – The initial state (neither fulfilled nor rejected)

Defining a JavaScript function with the keyword async sets the function’s return type to Promise. The async and await keywords are important when you are writing njs functions dealing with Promise objects.

Take this example:
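
A sketch of the example described below (the file path and function name are illustrative):

import fs from 'fs';

async function fetchFile(r) {
    // Invoke readFile() only if the requested file is called user.text
    if (!r.uri.endsWith('/user.text')) {
        r.return(404);
        return;
    }

    try {
        // await makes the wrapping function wait for the Promise to be fulfilled
        let data = await fs.promises.readFile('/data/user.text');
        r.return(200, data);
    } catch (e) {
        r.return(500, e.message);   // an exception sets the Promise state to rejected
    }
}

export default { fetchFile };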

The fs.promises.readFile function returns a Promise. It is wrapped in a custom async function that ensures readFile() is invoked only if the requested file is called user.text. Because of the await keyword, the wrapping function then waits for the Promise and returns the data.

Wrapping readFile() in another function makes it easier to catch errors; any exception in the async function sets the state of the Promise to rejected. Another way to do this is to return a rejected Promise explicitly when the file name check fails.
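
A one‑line sketch of that alternative (the error message is illustrative):

return Promise.reject(new Error('file name is not user.text'));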

You can also work directly with Promise objects. In the following sketch, a Promise is created for each of p1 and p2, and each is resolved when its timer fires. The Promise.all function waits for both p1 and p2 to be resolved before returning a result.
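
A sketch of that approach (the timeout values are illustrative):

function content(r) {
    var results = [];

    // Each Promise is resolved when its timer fires; the letters are pushed
    // in completion order, so 'b' (shorter timeout) comes first
    var p1 = new Promise(resolve => setTimeout(() => { results.push('a'); resolve(); }, 20));
    var p2 = new Promise(resolve => setTimeout(() => { results.push('b'); resolve(); }, 10));

    // Respond only after both promises are resolved
    Promise.all([p1, p2]).then(() => r.return(200, results.join(',')));
}

export default { content };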

Now the output from our curl command is what we want (note that b is returned first due to the shorter timeout value):

$ curl http://127.0.0.1/
b,a
$ 

New Cryptographic Functions with the WebCrypto API

NGINX JavaScript now has access to enhanced cryptographic capabilities via the WebCrypto API. Common njs cryptographic use cases include:

  • Generating secure random numbers for session IDs
  • Encrypting and decrypting messages, data, and cookies
  • Creating or validating digital signatures using symmetric as well as asymmetric crypto algorithms

This njs code generates a random number:
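
A minimal sketch (the function name is illustrative):

function random_values(r) {
    // Fill a typed array with cryptographically strong random values
    let buf = new Uint32Array(8);
    crypto.getRandomValues(buf);

    r.return(200, buf.join(','));
}

export default { random_values };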

And this NGINX Plus configuration invokes the njs code:
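
A sketch, assuming the function above is saved in random.js:

js_import main from random.js;     # in the http{} context

server {
    listen 80;

    location / {
        js_content main.random_values;
    }
}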

The output of the function is a random number something like this:

$ curl 127.0.0.1
23225320050,3668407277,1101267190,2061939102,2687933029,2361833213,32543985,4162087386

The getRandomValues function in WebCrypto is a great entry point to get started with secure random numbers and WebCrypto in general. Its implementation is quite simple, and the function returns results directly, as opposed to returning a Promise.

Some of the other more intensive WebCrypto cryptographic functions operate asynchronously, however. For example, the documentation for crypto.subtle.digest() states:

Generates a digest of the given data. Takes as its arguments an identifier for the digest algorithm to use and the data to digest. Returns a Promise which will be fulfilled with the digest.

Calling crypto.subtle.digest() directly, therefore, does not guarantee that its result will be available to the next step, unless it’s wrapped in an async function. So here we wrap it in a function with the async and await keywords to ensure that the hash variable is populated with a result before the function returns:
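
A sketch of that wrapper (the hex encoding of the digest is illustrative; host_hash and setReturnValue match the configuration described below):

async function host_hash(r) {
    // digest() returns a Promise; await ensures hash is populated before we continue
    let data = new TextEncoder().encode(r.headersIn.host);
    let hash = await crypto.subtle.digest('SHA-512', data);

    // Return the digest, hex-encoded, as the variable value
    r.setReturnValue(Buffer.from(hash).toString('hex'));
}

export default { host_hash };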

The js_set directive in this NGINX Plus configuration populates the $hosthash variable with the value passed to setReturnValue() in the host_hash function:
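
A sketch, assuming the function above is saved in hash.js:

js_import main from hash.js;

js_set $hosthash main.host_hash;

server {
    listen 80;

    location / {
        return 200 $hosthash;
    }
}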

Here’s an example that hashes the hostname example.com.

$ curl -H "Host: example.com" 127.1
e8e624a82179b53b78364ae14d14d63dfeccd843b026bc8d959ffe0c39fc4ded1f4dcf4c8ebe871e657a12db6f11c3af87c9a1d4f2b096ba3deb56596f06b6f4

Other Enhancements in NGINX Plus R26

Support for the IBM Z (s390x) Architecture

As modern applications colonize every available digital biome, it’s important that the essential life‑support components – like NGINX – travel with them, so we’re pleased to support NGINX Plus on the IBM Z (s390x) architecture with CentOS 8.1+, RHEL 8.1+, and Ubuntu 20.04. Organizations looking to host modern applications on their existing mainframe assets can now deploy NGINX and NGINX Plus as a software‑based web server, load balancer, reverse proxy, content cache, and API gateway.

TCP Half-Close Support in the Stream Module

The new proxy_half_close directive enables independent closure of each direction of a TCP connection, for extra efficiency in stream{} contexts.

PCRE2 Library Support

Previous versions of NGINX Plus used version 1 of the Perl Compatible Regular Expression (PCRE) library to evaluate the regular expressions used in NGINX configuration. This significant open source project has recently reached end of life, superseded by PCRE2. NGINX Plus now supports both PCRE and PCRE2, automatically using the version available with the underlying operating system. No configuration changes are required.

Upgrade or Try NGINX Plus

If you’re running NGINX Plus, we strongly encourage you to upgrade to NGINX Plus R26 as soon as possible. You’ll also pick up several additional fixes and improvements, and it will help NGINX to help you when you need to raise a support ticket.

If you haven’t tried NGINX Plus, we encourage you to try it out – for security, load balancing, and API gateway, or as a fully supported web server with enhanced monitoring and management APIs. You can get started today with a free 30-day trial.

Secure Your gRPC Apps Against Severe DoS Attacks with NGINX App Protect DoS

Customer demand for goods and services over the past two years has underlined how crucial it is for organizations to scale easily and innovate faster, leading many of them to accelerate the move from a monolithic to a cloud‑native architecture. According to the recent F5 report, The State of Application Strategy 2021, the number of organizations modernizing applications increased 133% in the last year alone. Cloud‑enabled applications are designed to be modular, distributed, deployed, and managed in an automated way. While it’s possible simply to lift-and-shift an existing monolithic application, doing so provides no advantage in terms of costs or flexibility. The best way to benefit from the distributed model that cloud computing services afford is to think modular – enter microservices.

Microservices is an architectural approach that enables developers to build an application as a set of lightweight application services that are structurally independent of each other and the underlying platform. Each microservice runs as a unique process and communicates with other services through well‑defined and standardized APIs. Each service can be developed and deployed by a small independent team. This flexibility provides organizations greater benefits in terms of performance, cost, scalability, and the ability to quickly innovate.

Developers are always looking for ways to increase efficiency and expedite application deployment. APIs enable software-to-software communication and provide the building blocks for development. To request data from servers using HTTP, web developers originally used SOAP, which sends details of the request in an XML document. However, XML documents tend to be large, require substantial overhead, and take a long time to develop.

Many developers have since moved to REST, an architectural style and set of guidelines for creating stateless, reliable web APIs. A web API that obeys these guidelines is called RESTful. RESTful web APIs are typically based on HTTP methods to access resources via URL‑encoded parameters and use JSON or XML to transmit data. With the use of RESTful APIs, applications are quicker to develop and incur less overhead.

Advances in technology bring new opportunities to advance application design. In 2015 Google developed Google Remote Procedure Call (gRPC) as a modern open source RPC framework that can run in any environment. While REST is built on the HTTP 1.1 protocol and uses a request‑response communication model, gRPC uses HTTP/2 for transport and a client‑server model of communication, with protocol buffers (protobuf) as the interface description language (IDL) used to describe services and data. Protobuf is used to serialize structured data and is designed for simplicity and performance. gRPC is approximately 7 times faster than REST when receiving data and 10 times faster when sending data, due to the efficiency of protobuf and the use of HTTP/2. gRPC also allows streaming communication and serves multiple requests simultaneously.

Developers find building microservices with gRPC an attractive alternative to using RESTful APIs due to its low latency, support for multiple languages, compact data representation, and real‑time streaming, all of which make it especially suitable for communication among microservices and over low‑power, low‑bandwidth networks. gRPC has increased in popularity because it makes it easier to build new services rapidly and efficiently, with greater reliability and scalability, and with language independence for both clients and servers.

Although the open nature of the gRPC protocol offers several positive benefits, the standard doesn’t provide any protection from the impact that a DoS attack can have on an application. A gRPC application can still suffer from the same types of DoS attacks as a traditional application.

Why Identifying a DoS Attack on a gRPC App Is Challenging

While microservices and containers give developers more freedom and autonomy to rapidly release new features to customers, they also introduce new vulnerabilities and opportunities for exploitation. One type of cyberattack that has gained in popularity is denial-of-service (DoS) attacks, which in recent years have been responsible for an increasing number of common vulnerabilities and exposures (CVEs), many caused by the improper handling of gRPC requests. Layer 7 DoS attacks on applications and APIs have spiked by 20% in recent years while the scale and severity of impact has risen by nearly 200%.

A DoS attack commonly sends large amounts of traffic that appears legitimate, to exhaust the application’s resources and make it unresponsive. In a typical DoS attack, a bad actor floods a website or application with so much traffic that the servers become overwhelmed by all the requests, causing them to stall or even crash. DoS attacks are designed to slow or completely disable machines or networks, making them inaccessible to the people who need them. Until the attack is mitigated, services that depend on the machine or network – such as e‑commerce sites, email, and online accounts – are unusable.

Increasingly, we have seen more DoS attacks using HTTP and HTTP/2 requests or API calls to attack at the application layer (Layer 7), in large part because Layer 7 attacks can bypass traditional defenses that are not designed to defend modern application architectures. Why have attackers switched from traditional volumetric attacks at the network layers (Layers 3 and 4) to Layer 7 attacks? They are following the path of least resistance. Infrastructure engineers have spent years building effective defense mechanisms against Layer 3 and Layer 4 attacks, making them easier to block and less likely to be successful. That makes such attacks more expensive to launch, in terms of both money and time, and so attackers have moved on.

Detecting DoS attacks on gRPC applications is extremely hard, especially in modern environments where scaling out is performed automatically. A gRPC service may not be designed to handle high‑volume traffic, which makes it an easy target for attackers to take down. gRPC services are also vulnerable to HTTP/2 flood attacks with tools such as h2load. Additionally, gRPC services can easily be targeted when the attacker exploits data definitions that are properly declared in a protobuf specification.

A typical, if unintentional, misuse of a gRPC service is when a bug in a script causes it to produce excessive requests to the service. For example, suppose an automation script issues an API call when a certain condition occurs, which the designers expect to happen every two seconds. Due to a mistake in the definition of the condition, however, the script issues the call every two milliseconds, creating an unexpected burden on the backend gRPC service.

Other examples of DoS attacks on a gRPC application include:

  • The insertion of a malicious field in a gRPC message may cause the application to fail.
  • A Slow POST attack sends partial requests in the gRPC header. Anticipating the arrival of the remainder of the request, the application or server keeps the connection open. The concurrent connection pool might become full, causing rejection of additional connection attempts from clients.
  • An HTTP/2 Settings Flood (CVE-2019-9515), in which the attacker sends empty SETTINGS frames to the gRPC protocol, consumes NGINX resources, making it unable to serve new requests.

Unleash the Power of Dynamic User and Site Behavior Analysis to Mitigate gRPC DoS Attacks with NGINX App Protect DoS

Securing applications from today’s DoS attacks requires a modern approach. To protect complex and adaptive applications, you need a solution that provides highly precise, dynamic protection based on user and site behavior and removes the burden from security teams while supporting rapid application development and competitive advantage.

F5 NGINX App Protect DoS is a lightweight software module for NGINX Plus, built on F5’s market‑leading WAF and behavioral protection. Designed to defend against even the most sophisticated Layer 7 DoS attacks, NGINX App Protect DoS uses unique algorithms to create a dynamic statistical model that provides adaptive machine learning and automated protection. It continuously measures mitigation effectiveness, adapts to changing user and site behavior, and performs proactive server health checks. For details, see How NGINX App Protect Denial of Service Adapts to the Evolving Attack Landscape on our blog.

Behavioral analysis is provided for both traditional HTTP apps and modern HTTP/2 app headers. NGINX App Protect DoS mitigates attacks based on both signatures and bad actor identification. In the initial signature‑mitigation phase, NGINX App Protect DoS profiles the attributes associated with anomalous behavior to create dynamic signatures that then block requests that match this behavior going forward. If the attack persists, NGINX App Protect DoS moves into the bad‑actor mitigation phase.

Based on statistical anomaly detection, NGINX App Protect DoS successfully identifies bad actors by source IP address and TLS fingerprints, enabling it to generate and deploy dynamic signatures that automatically identify and mitigate these specific patterns of attack traffic. This approach is unlike traditional DoS solutions on the market that detect when a volumetric threshold is exceeded. NGINX App Protect DoS can block attacks where requests look completely legitimate and each attacker might even generate less traffic than the average legitimate user.

The following Kibana dashboards show how NGINX App Protect DoS quickly and automatically detects and mitigates a DoS flood attack on a gRPC application.

Figure 1 displays a gRPC application experiencing a DoS flood attack. In the context of gRPC, the critical metric is datagrams per second (DPS) which corresponds to the rate of messages per second. In this image, the yellow curve represents the learning process: when the Baseline DPS prediction converges toward the Incoming DPS value (in blue), NGINX App Protect has learned what “normal” traffic for this application looks like. The sharp rise in DPS at 12:25:00 indicates the start of an attack. The red alarm bell indicates the point when NGINX App Protect DoS is confident that there is an attack in progress, and starts the mitigation phases.

Figure 1: A gRPC application experiencing a DoS attack

Figure 2 shows NGINX App Protect DoS in the process of detecting anomalous behavior and thwarting a gRPC DoS flood attack using a phased mitigation approach. The red spike indicates the number of HTTP/2 redirections sent to all clients during the global rate‑mitigation phase. The purple graph represents the redirections sent to specific clients when their requests match a signature that models the anomalous behavior. The yellow graph represents the redirections sent to specific detected bad actors identified by IP address and TLS fingerprint.

Figure 2: NGINX App Protect DoS using a phased mitigation approach to thwart a gRPC DoS flood attack

Figure 3 shows a dynamic signature created by NGINX App Protect DoS that is powered by machine learning and profiles the attributes associated with the anomalous behavior from this gRPC flood attack. The signature blocks requests that match it during the initial signature‑mitigation phase.

Figure 3: A dynamic signature

Figure 4 shows how NGINX App Protect DoS moves from signature‑based mitigation to bad‑actor mitigation when an attack persists. By analyzing user behavior, NGINX App Protect DoS has identified bad actors by the source IP address and TLS fingerprints shown here. Instead of looking at every request for specific signatures that indicate anomalous behavior, here service is denied to specific attackers. This enables generation of dynamic signatures that identify these specific attack patterns and mitigate them automatically.

Figure 4: NGINX App Protect DoS identifying bad actors by IP address and TLS fingerprint

With gRPC APIs, you use the gRPC interface description language (IDL) file and the proto definition files for the protobuf to set security policy. This provides a zero‑touch security policy solution – you don’t have to rely on the protobuf definition to protect the gRPC service from attacks. gRPC proto files are frequently used as a part of CI/CD pipelines, aligning security and development teams by automating protection and enabling security-as-code for full CI/CD pipeline integration. NGINX App Protect DoS ensures consistent security by seamlessly integrating protection into your gRPC applications so that they are always protected by the latest, most up-to-date security policies.

While gRPC provides the speed and flexibility developers need to design and deploy modern applications, the inherent open nature of its framework makes it highly vulnerable to DoS attacks. To stay ahead of severe Layer 7 DoS attacks that can result in performance degradation, application and website outages, abandoned revenue, and damage to customer loyalty and brand, you need a modern defense. That’s why NGINX App Protect DoS is essential for your modern gRPC applications.

To try NGINX App Protect DoS with NGINX Plus for yourself, start your free 30-day trial today or contact us to discuss your use cases.

For more information, check out our whitepaper, Securing Modern Apps Against Layer 7 DoS Attacks.

Join NGINX at F5 Agility 2022 February 15–16

This year’s edition of F5’s annual customer event, Agility 2022, is taking place February 15 and 16, and NGINX is excited to be part of the lineup! Join other developers and architects, as well as security and networking professionals, to gain insight from industry leaders and digital infrastructure experts, including the NGINX community.

F5 has filled this year’s agenda with many ways to learn and connect to the NGINX/F5 community, including insightful keynotes, new topics for educational sessions, interactive product simulators, live discussion forums, and much more. The event kicks off with a keynote featuring Rob Whiteley, General Manager of the NGINX Product Group at F5, and Kara Sprague, Executive Vice President and General Manager, App Delivery and Enterprise Product Ops (for a preview, see our blog). Then our product experts dive into all the details in the discussion forums, breakout sessions, and 10‑minute “quick hit” videos.

Read on for a summary of sessions sure to be of interest to NGINX users, divided into two thematic tracks: Enable Modern App Delivery at Scale and Secure Digital Experiences. Within the tracks, sessions are ordered by start time so you can plan your personal agenda. (If you also deploy F5 BIG-IP, be sure to check out the Simplify Delivery of Legacy Apps track too.)

Enable Modern App Delivery at Scale

Learn How to Talk Like You Understand Kubernetes

Live Discussion Forum: February 15, 11:00–11:45 a.m. PST
Brian Ehlert – Senior Product Manager, F5
Jenn Gile – Manager, Product Marketing, F5

With the invention of Kubernetes came a host of new concepts and tools, particularly around networking and traffic management. Notably, Ingress controllers and service meshes did not exist prior to the Kubernetes era, nor were Layer 4 and Layer 7 protocols and traffic typically managed from the same control plane. In this discussion forum, Brian and Jenn provide a “translation guide” for these new concepts and tools so you can have meaningful discussions with your leaders and teams.

What Is a Modern App?

Live Discussion Forum: February 15, 12:00–12:45 p.m. PST
Damian Curry – Business Development Technical Director, F5
Brandon Gulla – CTO and VP, SUSE Rancher Government Solutions
Karthik Krishnaswamy – Director, Product Marketing, F5
Austin Parker – Head of DevRel, Lightstep

Application modernization is not just a buzzword anymore – it’s rapidly gaining mainstream adoption. According to our own The State of Application Strategy in 2021 report, more than three‑quarters of organizations are modernizing their applications, an increase of 133% over 2020. But what exactly is a modern app? Is it just one that’s been shifted to the cloud, or delivered using Kubernetes and service mesh? Or does it entail new thinking around culture and processes such that apps enable business objectives? Join Damian and Karthik for a discussion with Austin and Brandon about how their experiences have informed their definition of modern apps.

Is That Project Ready for You? Open Source Maturity Models

Breakout: February 15, 2:00–2:25 p.m. PST
Dave McAllister – Sr. OSS Technical Evangelist, F5

Open source projects are often driven by the need to scratch a software itch. But while they’re innovative, exciting, and available, how do you know that open source projects are up to the work you need them to perform? Maturity models are one tool that can help you calculate the risk‑reward ratio for a project along dimensions like stability, activity, and support. Dave explains which criteria are important, and discusses the maturity level of NGINX open source projects.

Why You Need a Management Plane for Delivering and Securing Your Apps and APIs

Breakout: February 15, 2:00–2:25 p.m. PST
Chris Witeck – Director, Product Management, F5

With the explosion of applications and APIs, you need a management solution that provides governance for your Platform Ops teams without introducing friction for your DevOps and API owners. What are the core tenets of a management plane? How do you achieve scale and ensure security and reliability for your apps and APIs without slowing down your developers? Join Chris to learn more about how F5 NGINX solutions address these questions and more.

Are You Ready for a Service Mesh?

Breakout: February 15, 4:00–4:25 p.m. PST
Jenn Gile – Manager, Product Marketing, F5
Kate Osborn – Software Engineer, F5

Service mesh is the hot technology featured at just about every cloud‑native conference. But do you actually need one, and if so is your organization ready to implement it? Jenn and Kate share a six‑question checklist that will help you determine readiness and whether your organization will get value from adopting a mesh.

Modern Application Reference Architectures

Breakout: February 15, 4:00–4:25 p.m. PST
Jason Schmidt – Solutions Architect, F5
Elijah Zupancic – Solutions Architect, F5

There are many reference architectures out in the world, but how many can actually be deployed? The F5 NGINX Modern Application Reference Architecture (MARA) project is designed to be both production‑grade and easy to deploy. Join Elijah and Jason for an update on this project and a preview of what’s coming in the future.

Authorization and Authentication for Kubernetes Apps

Breakout: February 16, 12:00–12:25 p.m. PST
Amir Rawdat – Technical Marketing Manager

Authorization and authentication are non‑functional requirements that often get built into apps and services. On a small scale, the amount of complexity added by this practice is manageable if the app isn’t updated frequently. But with faster release velocities at larger scale, integrating authentication and authorization into your apps becomes untenable. In this session, Amir shares how offloading authorization and authentication to an Ingress controller can improve security and use resources more efficiently.

Modern Application Ecosystems

Live Discussion Forum: February 16, 12:00–12:45 p.m. PST
Damian Curry – Business Development Technical Director, F5

As you “shift left” into a more modern application architecture, you find yourself in the middle of a vast and complex ecosystem. When there are multiple solutions to every problem, how do you make the “right” choice? In this panel, Damian discusses some of the key considerations and real‑life examples of how these decisions are made – and how to avoid having them backfire on you.

When Performance Matters: Choosing Between NGINX Ingress Controllers

Breakout: February 16, 2:00–2:25 p.m. PST
Amir Rawdat – Technical Marketing Manager

There are two popular open source Kubernetes Ingress controllers that use NGINX and a commercial version based on NGINX Plus that is developed and supported by F5 NGINX. These projects all started around the same time, have similar names on GitHub, and implement the same function. In this session, Amir explains how they differ and how to choose an Ingress controller that’s best for your needs.

Managing NGINX with NGINX Instance Manager

Quick Hit: On demand (10 minutes)
Robert Haynes – Technical Marketing Manager, F5

Join Robert for a quick demo of NGINX Instance Manager. You’ll see how easy it can be to track, configure, and monitor all your NGINX Open Source and NGINX Plus instances.

What’s New with the NGINX Controller Application Delivery Module

Quick Hit: On demand (10 minutes)
Ken Araujo – Principal Product Manager, F5

Check out all the latest and greatest developments to the NGINX Controller Application Delivery Module. Ken showcases the new features and shows you why NGINX Controller is the ideal solution for holistic, simplified, app‑centric management of your NGINX app delivery and security services and the apps that they support.

Secure Digital Experiences

Empower Your Organization’s Digital Transformation with Easy Modern Application Security

Breakout: February 15, 3:00–3:25 p.m. PST
Yaniv Sazman – Manager, Product Management, F5
Daphne Won – Product Manager, F5

Today’s application landscape has changed dramatically. Modern apps are microservices that run in containers, communicate via APIs, and deploy via automated CI/CD pipelines. In this session, Daphne and Yaniv explore how DevOps teams can integrate controls that have been authorized by the security team across distributed environments without slowing release velocity or app performance.

Shifting Left with F5 NGINX

Live Discussion Forum: February 16, 8:00–8:45 a.m. PST
Thelen Blum – Sr. Product Marketing Manager for NGINX App Protect, F5
Fabrizio Fiorucci – Principal Solutions Engineer, F5
Jason McMinn – Infrastructure Architect & Sr. DevOps Engineer, Modern Hire

Gone are the days when security could simply be bolted on at the end of the development process. In today’s world, integrated security must become a normal part of AppDev and DevOps implementations. Instead of a painful extra step that must be dealt with, modern application security can be a robust support system that empowers organizations to reach their business goals and guides them to even greater heights. Join Fabrizio and Thelen in this conversation with Jason about how Modern Hire is securing its modern applications with NGINX App Protect on its journey to a completely cloud‑based environment.

360-Degree Web App and API Protection for Open Banking

Quick Hit: On demand (10 minutes)
Valentin Tobi – Strategic Security Architect, F5

Valentin shows how to use F5’s NGINX and Shape technologies to protect an open banking deployment, with the PingFederate solution serving as the identity provider. Learn how to use NGINX App Protect to secure an Open Banking API with integrated defense against bot attacks, deploy NGINX as an API microgateway, and manage both configurations through a CI/CD pipeline. Valentin also explains how to manage financial aggregators and protect against web scraping using Shape’s aggregator defense.

Deploy NGINX App Protect-Based WAF to AWS in Minutes

Quick Hit: On demand (10 minutes)
Mikhail Fedorov – Security Strategic Architect, F5

Discover a solution that deploys NGINX App Protect‑based WAF to the AWS cloud in minutes. This solution includes a complete set of components for successful WAF deployment: a fully automated data plane, a GitOps‑based control plane, and visibility dashboards. It can be used as a production‑grade WAF deployment or as an environment for evaluating NGINX App Protect WAF.

NGINX App Protect DoS Demo

Quick Hit: On demand (10 minutes)
Daniel Edgar – Sr. Technical Product Manager, F5

Looking to quickly and easily integrate advanced application availability protection into your NGINX Plus data plane instances? Join Daniel for a demo of NGINX App Protect DoS that showcases the power of machine learning for securing your modern applications against sophisticated Layer 7 DoS attacks.

Register Today for Agility 2022

Don’t miss a minute of all the great insight and solutions on offer! Register for Agility today. Your confirmation email will include details on how to build your personalized agenda.

The post Join NGINX at F5 Agility 2022 February 15–16 appeared first on NGINX.

]]>
Agility 2022: Compelling Digital Innovations Are Now Table Stakes for the Enterprise https://www.nginx.com/blog/agility-2022-compelling-digital-innovations-now-table-stakes-for-enterprise/ Thu, 03 Feb 2022 23:28:10 +0000 https://www.nginx.com/?p=68850 This February 15 and 16, we’re again hosting our annual customer event virtually, as we have since 2020. Agility 2022 is just one example of how lockdowns during the COVID‑19 pandemic have forced the wholesale shift of commercial and social activities online, turbocharging digital transformation. Digital transformation that simply replicates existing functionality is not enough, however. To capture the [...]

Read More...

The post Agility 2022: Compelling Digital Innovations Are Now Table Stakes for the Enterprise appeared first on NGINX.

]]>
This February 15 and 16, we’re again hosting our annual customer event virtually, as we have since 2020. Agility 2022 is just one example of how lockdowns during the COVID‑19 pandemic have forced the wholesale shift of commercial and social activities online, turbocharging digital transformation. Digital transformation that simply replicates existing functionality is not enough, however. To capture the hearts and minds of customers, enterprises need to generate compelling digital innovations, both internally and externally.

Take the case of OXXO, a fast‑growing convenience store chain in Mexico. Before the pandemic, the company was opening stores at a blistering pace – roughly one store per day. When customers were no longer allowed in the stores because of lockdowns, OXXO had to switch its business model and provide the same convenience via digital storefronts. The Mi OXXO online and mobile app was born, enabling OXXO to pivot to home delivery. The crisis unlocked rapid‑fire digital innovation and opportunity: OXXO gained a new sales channel, deepening its relationship with existing customers while picking up new digital‑native shoppers.

Empowering Developers to Speed Innovation

But what’s the key to rapid‑fire digital innovation that surpasses mere digital transformation? Developer empowerment.

You need to give developers – your secret weapon and superpower – what they need and want. OXXO empowered its developers to quickly build out a compelling and engaging digital experience by providing the infrastructure and tooling that they needed to develop, bring to market, and then operate an online shopping experience at scale.

In short, empowering developers leads to digital innovation.

Like OXXO, companies of all sizes and industries need to empower developers to create the next great digital service, mobile app, or online experience. Empowerment means trust, autonomy, and distributed decision making. Developers must be able to choose the tools of their trade and make key design and service choices for their digital masterpieces – within reason and in a rational framework.

Driving Innovation Comes with Costs

While empowering developers is key to driving innovation, doing so without any constraints is dangerous. It can lead to tool sprawl, an incongruous infrastructure that is hard to maintain and secure, and over time to a disjointed customer experience. Two of the primary risks of unchecked developer empowerment are complexity and cost.

Developers love tools (often open source). But too many tools lead to sprawl, with three or four pieces of technology performing the same task. This makes your architecture significantly more complex, adding management overhead along with operational and security risk. Complexity increases both hard and soft costs, even with free open source tools. (Remember – open source is sometimes free like a puppy.) The more tools you have, the more time your valuable infrastructure and platform teams must spend maintaining them. Onboarding new developers is also more complicated and time‑consuming, because they must become familiar with each tool’s UI (including its quirks), even when it just does the same thing as another tool.

Complexity also leads to potential downtime: more moving parts with their own operational models and toolchains increase the chances things will break under load. Having your app go down involves both reputational risk and financial risk from lost revenue. Rather than wait for you to fix the problem, users may become dissatisfied or go to other providers.

A Happy Medium: Build Guardrails, Not Gates

To avoid these pitfalls, you need to enable developers to be agile without making your apps fragile – to allow them to write code quickly and drive products to market faster while surrounding them with the guardrails necessary to ensure your enterprise is not at risk. We call this “running safely with scissors”. As parents we can try to prevent our children from running with scissors by yelling at them. Or we can acknowledge that they’re going to run with scissors and reduce the risks by buying them scissors with blunted tips. Bonus points if you can also educate them on the risks and get them to follow safety tips like holding scissors with the tips facing downwards.

For developers we need a similar two‑fold approach of allowing them to run while educating them on how to move faster without breaking things or causing undue risks.

Four Guardrails to Empower Developers

At F5, we believe there are four crucial guardrails you can put in place to empower developers while safeguarding against complexity, cost, and security risks:

  1. Shift security left
  2. Embrace modern open source
  3. Deploy infrastructure as code
  4. Automate through self‑service

Shift Security Left

Developers want to code. They want to provide new features and functionality. They don’t necessarily want to spend time and energy on security. But if you can introduce security earlier in the software development lifecycle, you can make security developer‑friendly (enough) that they’re willing to make it part of their workflows. This is referred to as shifting left.

Shifting left is about making developers more responsible for implementing security as they write code – whether it’s threat model assessments, code audits, pen testing, or applying security policies via controls like a web application firewall (WAF). Shifting left is also about making it easier and more intuitive for developers to accomplish these tasks within their existing workflows.
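
As a deliberately minimal illustration of what this can look like with NGINX, here’s a sketch of an NGINX App Protect WAF configuration that a developer might apply as part of a pipeline. It assumes App Protect is installed with its default policy and logging profile; the hostname, certificate paths, backend address, and syslog collector are placeholders.

load_module modules/ngx_http_app_protect_module.so;    # App Protect WAF module (path varies by install)

events {}

http {
    server {
        listen 443 ssl;
        server_name app.example.com;                                # placeholder hostname
        ssl_certificate     /etc/ssl/certs/app.example.com.crt;     # placeholder certificate
        ssl_certificate_key /etc/ssl/private/app.example.com.key;   # placeholder key

        # Enforce the default WAF policy shipped with App Protect
        app_protect_enable on;
        app_protect_policy_file /etc/app_protect/conf/NginxDefaultPolicy.json;

        # Ship security events to a remote collector (placeholder address)
        app_protect_security_log_enable on;
        app_protect_security_log /etc/app_protect/conf/log_default.json syslog:server=10.0.0.5:514;

        location / {
            proxy_pass http://10.0.0.10:8080;                       # placeholder backend
        }
    }
}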

According to a survey by SANS Institute, an organization that specializes in information security and cybersecurity training, fewer than 40% of enterprises have shifted security practices left in a manner consistent with DevSecOps principles. By contrast, consider organizations like Audi. With F5 NGINX App Protect, Audi empowers its application teams to accomplish 80% of required security operations natively within their workflows on the company’s Kubernetes platform.

Embrace Modern Open Source

Developers have an open-source-first mentality. Providing guardrails doesn’t work if you don’t provide open source tooling that fits into their ethos and technology stack preferences. By now, most companies use open source – with Linux, Docker, Kubernetes, and thousands of other tools leading the charge. According to Red Hat’s The State of Enterprise Open Source report for 2021, 90% of IT leaders are using enterprise open source. That said, there’s a vast array of open source possibilities today with many thousands of projects for diverse tasks. So it’s important to prevent tool sprawl by standardizing on a curated set of open source tools (taking into account the wishes and guidance of your developer rockstars, of course!).

For example, French cable TV provider Canal operates three separate open source SQL databases for its Canal+ premium service. But when it came to ingesting data, it wanted a standard. Canal developers initially chose NGINX Open Source to handle the 50,000 requests per second generated by the streaming platform. As Canal+ grew, the platform team seamlessly migrated to F5 NGINX Plus for the additional scale and observability capabilities needed to operate a platform receiving more than a billion clicks per year. Canal doesn’t have to operate multiple proxies and load balancers, yet its developers still got to choose a native open source tool without sacrificing operational requirements.

Deploy Infrastructure as Code

Shifting security left and embracing open source rely on another best practice: treating your infrastructure as code. This means deploying infrastructure as software or services that can be programmed through APIs.

According to our own State of Application Strategy in 2021 report, 52% of organizations have embraced Infrastructure as Code (IaC) practices. IaC means your infrastructure can be scaled up and down as needed – and it extends beyond just your security and open source tooling. Treating all modern app infrastructure as code pushes responsibility for configuring infrastructure closer to those who know the application best: the developers and DevOps teams. Containers have revolutionized how infrastructure can be deployed, allowing developers and DevOps teams to easily design and invoke the infrastructure best suited to their app deployments.

African Bank is a case in point: it modernized its core banking platform using a microservices architecture and NGINX. By treating infrastructure as code, the bank empowered its developers to push new features to production faster. Using containers as the foundation, African Bank reduced the time it takes to offer its customers a new banking capability from six months to six weeks.

Automate Through Self-Service

At this point you might be asking “Do I really want to empower my developers that much?” Shifting security left, embracing open source, and deploying infrastructure as code – while allowing developers the leeway to do it all without guardrails – can lead to the operational risks we discussed. That’s why you need self‑service infrastructure, both to make it easier for developers to deploy the infrastructure and services they need and to reduce complexity while controlling cost.

Our State of Application Strategy in 2021 report also revealed that 65% of organizations have adopted automation and orchestration. More specifically, 68% are adopting automation for network and security. Why? So that DevOps, NetOps, SecOps, and Platform Ops teams can make infrastructure easy for developers to provision through self‑service catalogs, enabled by containers that limit approved deployments to “golden images” vetted as safe and effective for production. In fact, this combination of “self‑service” and “golden image” is part of what has made public clouds so popular. They function as self‑service portals for a catalog of developer‑friendly technologies, enabling small teams to build and scale apps that compete with giant enterprises. That said, we know that cloud providers don’t always provide the most secure default configurations. This is why it’s so critical to manage your own “golden images” and set them up to protect your own specific infrastructure.

For example, Modern Hire solved this problem by automating WAF deployments through F5 NGINX Controller. The company relies on NGINX Plus as a critical part of its cloud architecture for security and traffic processing, which freed it to close down a legacy data center – but when it came to pushing security policies, it still didn’t have a solid automation story. Enter NGINX Controller, which provides a DevOps workflow engine for NGINX App Protect WAF. Now Modern Hire’s DevOps team can provide a self‑service interface to the security team without breaking the value of automation in a DevOps environment. Join the Shifting Left with F5 NGINX discussion forum at Agility 2022 to learn more from Jason McMinn, Infrastructure Architect & Sr. DevOps Engineer at Modern Hire.

Innovations Drive Progress and Productivity

During the last two years, we’ve seen customers that moved quickly to empower developers reap the rewards of their trust and foresight. Like OXXO, many created new sales channels or empowered their developers to completely rearchitect applications and infrastructure for a future of distributed cloud‑ and service‑based deployment frameworks. Those customers are today in a better place and looking to the future. We are pleased we can help them on that path and we hope that our vision and recent product announcements help them even further.

Shifting left, enabling open source, scaling infrastructure as code, creating self‑service architectures, and building adaptive applications on the most modern and resilient infrastructure technologies like Kubernetes all drive digital innovation. Together with my F5 colleagues, I’ll be speaking during the keynote on Day 1 of Agility 2022 (9:00–10:30 a.m. PST, February 15) about how to simplify, secure, and innovate in the digital era.

You can register here for Agility, a two‑day virtual event that includes keynotes, panel discussions, breakout sessions, and demos. And it’s completely free! We look forward to meeting you virtually at Agility, and to partnering with you to empower your developers and push the envelope in the coming year and beyond.

The post Agility 2022: Compelling Digital Innovations Are Now Table Stakes for the Enterprise appeared first on NGINX.

]]>
Microservices March 2022: Kubernetes Networking https://www.nginx.com/blog/microservices-march-2022-kubernetes-networking/ Tue, 01 Feb 2022 23:24:16 +0000 https://www.nginx.com/?p=68695 What Is Microservices March? It’s a month‑long, free educational program designed to up‑level your knowledge of various microservices topics. (If you missed last year’s program, you can catch all the content on demand.) Why Kubernetes Networking for 2022? Getting Kubernetes into production is a top priority for many organizations. But the journey is hard and [...]

Read More...

The post Microservices March 2022: Kubernetes Networking appeared first on NGINX.

]]>

What Is Microservices March?

It’s a month‑long, free educational program designed to up‑level your knowledge of various microservices topics. (If you missed last year’s program, you can catch all the content on demand.)

Why Kubernetes Networking for 2022?

Getting Kubernetes into production is a top priority for many organizations. But the journey is hard and getting value can be even harder. Through conversations with our customers and community, we’ve observed that Kubernetes networking – a key component of production‑grade Kubernetes – is frequently misunderstood or underappreciated. Without a solid Kubernetes networking strategy (and the right talent in place to execute it), the most likely outcomes are downtime, security breaches, and wasted money and effort. As providers of two Kubernetes networking tools – an Ingress controller and service mesh – we’re in a great position to use our knowledge and resources to ease your path to production Kubernetes.

What Is Kubernetes Networking?

Simply put, Kubernetes networking is a framework for connecting your Kubernetes components, services, and traffic – but it’s more than just moving packets from A to B. Kubernetes networking includes layers of unique networking constructs and pieces (from nodes and clusters to Ingress controllers to service meshes) that work together to provide a range of capabilities. Whether you’re just getting started in Kubernetes or are dealing with advanced architectural decisions, understanding Kubernetes networking is crucial to successfully delivering your containerized apps and APIs.

What Will I Learn?

If you’re a complete newbie to Kubernetes networking – don’t worry! This year’s program is a series of events and self‑paced activities designed to take you from Zero to Hero. You can choose to complete the entire program or just the bits and pieces you need to fill in your knowledge gaps. The total time commitment is about 16 hours, spread across 4 weeks.

Four units progressively guide you through the essentials of Kubernetes networking:

  • Unit 1 (March 7–11): Architecting Kubernetes Clusters for High‑Traffic Websites
  • Unit 2 (March 14–18): Exposing APIs in Kubernetes
  • Unit 3 (March 21–25): Microservices Security Pattern
  • Unit 4 (March 28–31): Advanced Kubernetes Deployment Strategies

Each unit includes:

  • A YouTube livestream featuring experts from NGINX and learnk8s
  • A collection of blogs, videos, and ebooks to deepen your knowledge
  • A hands‑on, self‑paced lab for experimenting with Kubernetes technologies
  • Access to the NGINX experts via our Slack community

Who Should Participate?

While anyone interested in Kubernetes will learn a lot, we’d especially like to invite current and aspiring “Kubernetes operators” – regardless of Kubernetes skill level – to participate because the curriculum caters to your professional development!

What’s a Kubernetes Operator?

A Kubernetes operator or “cluster operator” (not to be confused with the Kubernetes programmatic operator construct) is analogous to what we would have called a system administrator in the pre‑cloud era.

  • Job titles – We haven’t met anyone with the job title “Kubernetes Operator”…yet. Most often, people in this role have the title of Cloud Architect or Site Reliability Engineer. They’re usually part of a larger operations team, such as a Platform Ops team.
  • Job description – Kubernetes operators are responsible for Kubernetes as a piece of infrastructure and are commonly responsible for helping other teams run their services on Kubernetes. Their jobs can include planning and monitoring capacity, scaling the clusters, and handling larger management tasks that help provide Kubernetes as a platform for network and app teams.

How Do I Join Microservices March 2022?

It’s easy! Sign up for free to get access to the program.

When you register, we’ll collect just enough information to provide you with calendar reminders for the livestreams, weekly learning guides, access to the Slack community, and enrollment for the labs.

We love hearing about what you’re interested in and how we can make your Microservices March experience valuable and fun. If you have questions or suggestions, please feel free to leave them in the comments section below. Stay tuned for more updates and we can’t wait to “see” you in March!

The post Microservices March 2022: Kubernetes Networking appeared first on NGINX.

]]>
Supporting Open Source for a More Secure World: F5 NGINX Announces Sponsorships of Let’s Encrypt and OpenSSL https://www.nginx.com/blog/supporting-open-source-f5-nginx-sponsorships-of-lets-encrypt-openssl/ Mon, 24 Jan 2022 19:52:59 +0000 https://www.nginx.com/?p=68568 Our goals at F5 NGINX include not only building great open source software that enables modern applications and Platform Ops practices, but also making the technology world more secure. We take great pride in our security efforts, and also recognize that it takes a broader team to secure the technology fabric we all increasingly rely on [...]

Read More...

The post Supporting Open Source for a More Secure World: F5 NGINX Announces Sponsorships of Let’s Encrypt and OpenSSL appeared first on NGINX.

]]>
Our goals at F5 NGINX include not only building great open source software that enables modern applications and Platform Ops practices, but also making the technology world more secure. We take great pride in our security efforts, and also recognize that it takes a broader team to secure the technology fabric we all increasingly rely on in our daily lives. We are proud to announce our sponsorship of two of the open source projects that are crucial in securing technology across the globe – Let’s Encrypt and OpenSSL.

Let’s Encrypt

Let’s Encrypt has lowered the barrier to security by making it, quite literally, free to obtain and deploy digital certificates. More specifically, the mission of Let’s Encrypt is to create a more secure World Wide Web by promoting the widespread adoption of HTTPS. To do this, Let’s Encrypt runs the world’s largest certificate authority, securing more than 260 million websites and simplifying certificate management by eliminating or automating many of the tedious manual steps, such as payment, web server configuration, email validation, and certificate renewal.

It is not an overstatement to say that Let’s Encrypt has been one of the major reasons why the rollout of TLS encryption has been so successful. We like Let’s Encrypt so much that we recommend it to our customers and have even built some recipes to automate the process of using Let’s Encrypt as a certificate authority with NGINX Ingress Controller and NGINX Service Mesh.
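
For a standalone NGINX instance, the experience is just as simple. Here’s a minimal sketch using certbot’s NGINX plugin – the domain is a placeholder:

# Obtain a certificate and update the matching NGINX server block automatically
sudo certbot --nginx -d www.example.com

# Certificates are valid for 90 days; a dry run confirms automated renewal will work
sudo certbot renew --dry-run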

OpenSSL

OpenSSL is another unsung hero of global technology security. A small core team builds and maintains “a robust, commercial‑grade, and full‑featured toolkit” for the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols. OpenSSL publishes a free and open source cryptography library under a permissive Apache‑style license, as an alternative to proprietary cryptography libraries, which can cost tens or hundreds of thousands of dollars per year. By removing this cost barrier, OpenSSL has enabled millions of organizations to implement stronger security than they could have otherwise.
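
To give a flavor of what the toolkit provides, here are two everyday openssl commands – creating a self‑signed certificate for a test server and inspecting a live TLS handshake. The file names and domain are placeholders.

# Create a private key and self-signed certificate for local testing
openssl req -x509 -nodes -newkey rsa:2048 \
        -keyout example.key -out example.crt \
        -days 365 -subj "/CN=www.example.com"

# Examine the certificate chain and negotiated TLS parameters of a live server
openssl s_client -connect www.example.com:443 -servername www.example.com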

Join Us in Supporting Open Source Software

NGINX believes that open source is the way and the future of global technology security. We in the open source community are all in this together. Encouraging the broader use of core foundational security benefits us all by reducing the attack surface of the global application infrastructure. The real heroes here, of course, are the project maintainers and contributors. They are selfless and awesome.

We hope that our support makes it easier for Let’s Encrypt and OpenSSL to do their important work. Please join us in supporting them – and other open source projects – in any way you can. If you know of other projects that deserve support, please let us know in the comments section.

The post Supporting Open Source for a More Secure World: F5 NGINX Announces Sponsorships of Let’s Encrypt and OpenSSL appeared first on NGINX.

]]>
F5 and NGINX Together Extend Robust Security Across Your Hybrid Environment https://www.nginx.com/blog/f5-nginx-together-extend-robust-security-across-your-hybrid-environment/ Thu, 20 Jan 2022 17:52:09 +0000 https://www.nginx.com/?p=68610 When one of the world’s most successful premium car makers picks an application security solution, you can be confident they’ve made sure it meets their standards for performance and reliability. That’s why we’re proud that the Audi Group – active in more than 100 markets worldwide – recently chose F5 NGINX App Protect WAF to secure its Kubernetes‑based [...]

Read More...

The post F5 and NGINX Together Extend Robust Security Across Your Hybrid Environment appeared first on NGINX.

]]>

When one of the world’s most successful premium car makers picks an application security solution, you can be confident they’ve made sure it meets their standards for performance and reliability. That’s why we’re proud that the Audi Group – active in more than 100 markets worldwide – recently chose F5 NGINX App Protect WAF to secure its Kubernetes‑based platform for modern application development.

NGINX App Protect is a prime example of how F5 enables customers on their digital transformation journeys by integrating its industry‑leading security expertise into tools for modern apps. In this case, we’ve ported the security engine from F5 Advanced Web Application Firewall (WAF) – tried and tested over decades by our BIG‑IP customers – into NGINX, known as an ideal platform for modern app delivery thanks to its exceptional performance, flexible programmability, and ease of deployment in any environment.

Like many F5 customers, Audi relies on both BIG‑IP and NGINX. By leveraging a common security engine in products with the right form factor for different environments, Audi can be confident that its entire infrastructure is protected from the OWASP Top 10 and other advanced threats. It also means that Audi’s DevOps and SecOps teams can operate in harmony with robust support from F5.

F5 acquired NGINX in 2019 because it recognized the changes it was seeing in the app‑delivery landscape as inexorable. NGINX App Protect is one of the first demonstrations of the synergy that makes F5 and NGINX better together. We look forward to building further on that synergy, strengthening both F5’s security portfolio and its role in the modern application landscape.

How NGINX Helps Make F5 Better

In the mid‑2010s, F5 realized that to continue succeeding in the modern app‑delivery landscape it needed to build out its product portfolio. Today we see those changes accelerating, as evidenced by these trends:

  • Enterprises are adopting Kubernetes for modern app environments
  • Enterprises are adopting DevSecOps, which shifts security left, closer to developers
  • Developers prefer – and often insist on – open source software

As enterprises move to modern app deployments and architectures, the world of application security is also witnessing a shift away from models that treat infrastructure as a shared service. Increasingly, microservices and Kubernetes dominate the modern app landscape, with security tools fully integrated into the delivery process. According to the 2021 Kubernetes Adoption report, 89% of IT professionals expect Kubernetes to expand its role in infrastructure management over the next two to three years as Kubernetes adoption and functionality continue to grow.

BIG‑IP and NGINX provide similar core application‑delivery functionality but are suited to different app development and delivery environments. BIG‑IP’s relatively large footprint isn’t ideal for all application types, especially highly distributed and dynamic ones. Particularly as DevSecOps shifts security left – and developers deploy new and updated software faster than ever – enterprises need a solution with a smaller footprint that integrates easily into DevOps workflows.

F5 provides that solution in the form of NGINX App Protect and other NGINX products. Additionally, NGINX satisfies the craving of today’s modern app developers – and anyone who focuses on building applications rather than managing networks and security – for open source technology. The DevSecOps culture also leans towards open source, and NGINX brought to F5 its large, enthusiastic open source community and modern mindset. Beyond that, NGINX’s modern modular architecture makes it easy to incorporate F5 security technology in the form of modules.

With its open source roots, NGINX has put a community‑forward mindset front and center in its app development and microservices architectures. Now NGINX is helping influence F5 to extend its more traditional culture and embrace open source as part of product development. As a clear example, at Sprint 2.0, F5 announced its expanded participation in open source projects like the Kubernetes Gateway API SIG and community.

How F5 Helps Make NGINX Better

The F5 Advanced WAF is a perfect fit for security‑focused organizations that wish to self‑manage and tailor granular controls for traditional apps. Its WAF and DoS security engines have long been available to BIG‑IP customers as modules in Advanced WAF, but not in a lightweight form factor suitable for microservices architectures and environments. NGINX customers, on the other hand, had trouble finding a WAF with the rich feature set of Advanced WAF that didn’t drive up latency.

After the NGINX acquisition, F5 made it a top priority to port its trusted application security solutions to NGINX, offering enterprise‑grade security expertise in a high‑performance and lightweight form factor that serves the needs of DevOps and DevSecOps teams building modern applications. NGINX App Protect is the result. Immediately upon its release in 2020, it set new benchmarks for low latency, high performance, and resistance to bypass techniques.

The many benefits from integrating Advanced WAF’s power into NGINX include:

  • Protecting and scaling mission‑critical, advanced front‑end services in your modern application stack
  • Achieving the time-to-market benefits of a microservices architecture without compromising reliability and security controls
  • Providing consistent, robust, and high‑performance application security wherever application traffic moves – whether through BIG‑IP or through microservices architectures enabled by NGINX

NGINX App Protect WAF provides high performance in a small footprint, optimized for microservices architectures, cloud, and containers. NGINX App Protect DoS defends against hard-to-detect Layer 7 attacks.

And how does F5 serve enterprises that want to shift left? By enabling them to inject battle‑tested and superior application security into their CI/CD pipelines, reducing the inherent risks of rapid and frequent releases. The F5 NGINX Controller App Security add‑on for both API management and application delivery enables AppDev and DevOps teams to implement WAF protection in their development pipelines in a self‑service manner while still complying with corporate security requirements. You can also apply consistent policies across all of your BIG‑IP and NGINX deployment environments with the NGINX App Protect Policy Converter.

Improving Governance and Observability with Machine Learning and Portable Policies

Of course, technology never stops evolving, and F5 and NGINX plan to continue innovating.

F5’s “Adaptive Applications” Vision Promises Comprehensive Security

As modern threats become increasingly complex, an app’s ability to adapt to threats and other changes becomes ever more crucial. In an ideal world, app services independently scale based on demand. F5 sees this as entering a new world of “Adaptive Applications” – one where a consistent, declarative API layer enables easy management of applications that learn to take care of themselves and avoid evolving security threats, allowing customers to safely deliver modernized experiences.

Acquisitions like Shape and Threat Stack Enrich F5 with ML and Observability

Further expanding its world‑class portfolio of application security and delivery technology, F5 acquired Shape Security, a leader in online fraud and abuse prevention, in 2020, and Threat Stack, a cloud‑ and container‑native observability solution, in 2021. Incorporating Shape and Threat Stack technology gives F5 an end-to-end application security solution with proactive risk identification and real‑time threat mitigation, plus enhanced visibility across application infrastructures and workloads. Dashboards and monitoring are already in the works, along with projects focusing on machine learning (ML). F5 sees the need for sophisticated, adaptive protection and is dedicated to expanding its offerings in that area.

One WAF Engine Across Platforms Ensures Effective Security Everywhere

Using common WAF technology, F5 customers can maintain their standardized security policies when migrating from traditional environments to containerized and cloud environments, and from F5 Advanced WAF to NGINX App Protect. Portability across our WAF products – enabled by a shared declarative API for WAF policy – ensures continued security and confidence for F5 customers. Staying close to the application workloads, F5 is committed to enabling WAF capabilities in form factors best able to meet the needs of the application and its architecture.

Get Started with F5 NGINX Today

To stay up to date with F5 NGINX, engage with your trusted technology advisors – whether that be your account team or partner. Environments are constantly being streamlined for better management, and it’s easier than ever to stay plugged‑in and subscribe, especially with our focus on community. Whether you’re shifting left, requiring complex protection, or looking for time-to-market benefits, F5 NGINX’s tested technology, smaller footprint, and high‑performance solutions ensure agile and lightweight security both now and for the future.

Regardless of where you are in your app development journey, you can get started with free 30-day trials of our commercial security solutions.

The post F5 and NGINX Together Extend Robust Security Across Your Hybrid Environment appeared first on NGINX.

]]>
Do Svidaniya, Igor, and Thank You for NGINX https://www.nginx.com/blog/do-svidaniya-igor-thank-you-for-nginx/ Tue, 18 Jan 2022 14:55:30 +0000 https://www.nginx.com/?p=68609 With profound appreciation and gratitude, we announce today that Igor Sysoev – author of NGINX and co‑founder of NGINX, Inc. – has chosen to step back from NGINX and F5 in order to spend more time with his friends and family and to pursue personal projects. Igor began developing NGINX in the spring of 2002. He watched the meteoric [...]

Read More...

The post Do Svidaniya, Igor, and Thank You for NGINX appeared first on NGINX.

]]>
With profound appreciation and gratitude, we announce today that Igor Sysoev – author of NGINX and co‑founder of NGINX, Inc. – has chosen to step back from NGINX and F5 in order to spend more time with his friends and family and to pursue personal projects.

Igor began developing NGINX in the spring of 2002. He watched the meteoric growth of the early Internet and envisioned a better way to handle web traffic, a novel architecture that would allow high‑traffic sites to better handle tens of thousands of concurrent connections and cache rich content such as photos or videos that was slowing down page loads. 

Fast forward 20 years, and the code that Igor created now powers the majority of websites running on the planet – both directly and as the software underlying popular servers like Cloudflare, OpenResty, and Tengine. In fact, one could easily argue that Igor’s vision is a key part of what makes the web what it is today. Igor’s vision and values then shaped the company NGINX, Inc., fostering a commitment to code excellence and transparency powered by open source and community, while simultaneously creating commercial products that customers loved. 

This is not an easy balancing act. That Igor is held in such high esteem by community and developers, enterprise customers, and NGINX engineers is a testament to his leadership by example with humility, curiosity, and an insistence on making great software.

A Brief History of Igor and NGINX

Igor came from humble beginnings. The son of a military officer, he was born in a small town in Kazakhstan (then a Soviet republic). His family moved to the capital Almaty when he was a year old. From a young age, Igor was fascinated with computers. He wrote his first lines of code on a Yamaha MSX as a high schooler in the mid‑1980s. Igor later graduated with a degree in computer science from the prestigious Bauman Moscow State Technical University as the early Internet was beginning to take shape. 

Igor started working as a systems administrator but continued to write code on the side. In 1999 he released his first program, written in assembly language: the AV antivirus program, which guarded against the 10 most common computer viruses of the time. Igor freely shared the binaries, and the program was widely used in Russia for several years. He started work on what would become NGINX after he realized that the way the original Apache HTTP Server handled connections could not scale to meet the needs of the evolving World Wide Web.

In particular, Igor sought to solve the C10k problem – handling 10,000 concurrent connections on a single server – by building a web server that not only handled massive concurrency but could also serve bandwidth‑hogging elements such as photos or music files more quickly and efficiently. After several companies in Russia and abroad began using NGINX, Igor open sourced the project with a permissive license on October 4, 2004 – the 47th anniversary of the day the USSR launched Sputnik, the world’s first artificial satellite. 

For seven years, Igor was the sole developer of NGINX code. During that period, he wrote hundreds of thousands of lines of code and grew NGINX from a web server and reverse proxy into a true Swiss Army Knife™ for web applications and services, adding key capabilities for load balancing, caching, security, and content acceleration. 

NGINX rapidly gained market share, even though Igor spent zero time evangelizing the project and documentation was limited. Even with a missing manual, NGINX worked and word spread. More and more developers and sysadmins used it to solve their problems and accelerate their websites. Igor didn’t need praise or promotion. His code spoke for itself. 

NGINX Goes Commercial But Stays True to Open Source

In 2011, Igor formed the company NGINX, Inc. with co‑founders Maxim Konovalov and Andrew Alexeev, in order to accelerate development velocity. Even though Igor understood that he and his team now needed to figure out ways to make money, they made a vow to maintain the integrity of the open source version of NGINX and to keep its permissive license. He has been true to his word. Under Igor’s direction, NGINX has consistently improved its open source product in more than 140 releases since the company’s founding. Today NGINX software powers hundreds of millions of websites.

On the road raising venture capital for NGINX, Inc. – (from right) Igor, CEO Gus Robertson, co‑founders Andrew Alexeev and Maxim Konovalov

In 2011, the idea of adding functionality to a commercial version in the form of proprietary modules was novel thinking; today, numerous open source startups follow this path. When that commercial version, NGINX Plus, launched in 2013, it was warmly received. Four years later, NGINX had over 1,000 paying customers and tens of millions of dollars in revenue, even as NGINX Open Source and the NGINX community continued to grow and prosper. By the end of 2019, NGINX was powering more than 475 million websites, and in 2021 NGINX became the most widely used web server in the world.

Always looking to the future, Igor has overseen the rapid development of other popular NGINX projects, including NGINX JavaScript (njs) and NGINX Unit. He also architected a new implementation of the sendfile(2) system call, which was incorporated into the open source FreeBSD operating system. And as the ranks of NGINX engineers have grown and the company has joined F5, Igor has remained a steady behind-the-scenes presence, providing vision and guidance that has kept NGINX on the right path.

Carrying On Igor’s Legacy of Excellence

Today, our paths diverge with Igor stepping back for a well‑deserved break. Fortunately, his ethos and the culture he created are not going anywhere. In great companies, products, and projects, the DNA of the founder is persistent and immutable. Our approach to products, community, transparency, open source, and innovation have all been shaped by Igor and will continue with Maxim and the NGINX leadership team. 

The ultimate legacy of Igor’s time with NGINX and F5 is, of course, the code itself. Igor wrote much of the code that is still in use today. The test of time will be whether we can continue to write code as timeless and create products as useful and widely respected as Igor has. It’s a high bar, but Igor has left us in a good place to live up to these aspirations. Igor – thank you so much for all the years working with us and we wish you the very best in your next chapter. 

The post Do Svidaniya, Igor, and Thank You for NGINX appeared first on NGINX.

]]>
What’s New with the NGINX Controller Application Delivery Module for 2022 https://www.nginx.com/blog/whats-new-nginx-controller-application-delivery-module-2022/ Wed, 12 Jan 2022 20:36:58 +0000 https://www.nginx.com/?p=68565 Said another way, a series of small, incremental changes, delivered often, can have a very large impact. This thinking is the basis for modern apps that are commonly developed using CI/CD pipelines. While daily integration of new code into the mainline might not be a silver bullet, the accumulation of many small submissions can result [...]

Read More...

The post What’s New with the NGINX Controller Application Delivery Module for 2022 appeared first on NGINX.

]]>
“The man who moves a mountain begins by carrying away small stones.”
– Confucius

Said another way, a series of small, incremental changes, delivered often, can have a very large impact. This thinking is the basis for modern apps that are commonly developed using CI/CD pipelines. While daily integration of new code into the mainline might not be a silver bullet, the accumulation of many small submissions can result in the next killer application.

Like those who aspire to move a mountain, over the past few months NGINX released new versions of the NGINX Controller Application Delivery Module (ADM) that combine to dramatically improve the product – already a powerful governance, observability, and simplified operations platform for NGINX Plus deployments and the applications they support.

Specifically, Releases 3.20, 3.21, and 3.22 of the ADM together offer both significant new features and enhanced functionality, much of it the result of your feedback. In this blog we take a look at highlights in each release that help you keep your apps available, secure, and performing optimally.

New Features and Enhancements in Release 3.22

Released on December 20, 2021, Release 3.22 includes these new features and enhancements:

  • Snippets – A core mission of NGINX Controller is simplifying workflows and aligning to an app‑centric model for observability, governance, and operations. By design, implementing this model comes with tradeoffs in the form of a more “opinionated” view of configuration and slight limitations on how much you can customize your NGINX deployment – especially when compared with direct configuration and management of NGINX Plus instances. But we understand that sometimes you really need to tailor configurations for specific use cases.

    With snippets, you can now insert custom NGINX configuration that isn’t natively supported by the Controller API into the main, http, stream, server, location, and upstream contexts in an NGINX configuration. For best practices and examples, see About Snippets in the Controller documentation.

  • Workload health‑check events – A primary use case for NGINX Controller is app‑centric visibility and insight, which helps you ensure your apps stay healthy and available. Release 3.22 enhances this functionality with two additional workload health‑check events generated per component per instance:

    • A triggered event that signifies changes in state of workload group members from “Healthy” to “Unhealthy”
    • An event that provides a snapshot of the current state of workload group members, sent every few minutes
  • Workload health‑check probe programmability – You can configure the headers in health‑check probes sent by the NGINX Plus data plane to the workload or upstream servers hosting applications (see the configuration sketch after this list).

  • Caching – One of the key differentiators of NGINX Plus is its ability to cache both static and dynamic HTTP content from proxied web and application servers. Caching improves app performance by reducing both the load on the servers and the latency of responses sent to clients, ultimately driving better digital experiences for customers.

    In Release 3.22 you can configure caching via the API or UI, and dive into performance metrics and dashboards for cached content. You can also use the new snippet functionality described above for the advanced caching configurations that NGINX supports, such as different cache locations based on content type. For more information, see About Caching in the Controller documentation; a caching configuration sketch also follows this list.

  • Worker process tuning – You can tune NGINX Plus worker processes to better leverage the capabilities of the underlying machine, using the Controller API to set the following directives: multi_accept, worker_connections, worker_priority, worker_processes, and worker_rlimit_nofile. (A tuning sketch follows this list.)

  • Instance groups – You can now create a logical group of NGINX Plus instances which then receive identical configuration. This enables at‑scale configuration of multiple instances in a single step.

  • Additional enhancements

    • Support for enabling proxying to upstream servers with NTLM authentication.
    • UI enhancements for configuring rate limiting and JWT authentication for ADC web components.
    • Support for OpenID Connect (OIDC) authentication with Azure AD as the Identity Provider.
    • Support for SELinux – You can now run both Controller and Controller Agents on Linux machines where SELinux is enabled.
    • Support for NGINX App Protect WAF 3.7.
    • Technology preview of Red Hat Enterprise Linux (RHEL) 8 – You can run both Controller and Controller Agents on RHEL 8 as a proof of concept. We have tested this functionality in small‑scale deployments only. Performance and stability issues are possible, so we strongly recommend experimenting with scaling in a test environment before deploying to production.
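
For context on the health‑check items above, here’s a minimal sketch of the kind of NGINX Plus configuration that Controller drives on your behalf, including a custom probe header. The upstream addresses, URI, and header name are illustrative, not taken from Controller’s generated output.

upstream workload {
    zone workload 64k;                  # shared memory zone, required for active health checks
    server 10.0.0.11:8080;              # illustrative workload group members
    server 10.0.0.12:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://workload;
        # Headers set here are also sent with the active health-check probes
        proxy_set_header X-Probe-Source controller;     # hypothetical custom header
        health_check uri=/healthz interval=5s fails=3 passes=2;
    }
}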
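
The caching features map to NGINX’s standard content‑caching directives. Here’s a minimal sketch assuming a generic proxied backend; the paths, zone name, backend address, and validity periods are illustrative only.

events {}

http {
    # On-disk cache location plus a shared memory zone for cache keys
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;

        location / {
            proxy_pass http://10.0.0.10:8080;                   # illustrative backend
            proxy_cache app_cache;
            proxy_cache_valid 200 302 10m;                      # cache successful responses for 10 minutes
            proxy_cache_valid 404 1m;                           # cache 404s only briefly
            add_header X-Cache-Status $upstream_cache_status;   # HIT, MISS, EXPIRED, and so on
        }
    }
}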
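
The worker directives live at the top level and in the events context of nginx.conf. Here’s a sketch of the sort of tuning Controller can now apply – the values are illustrative and should be sized to your hardware and traffic:

worker_processes     auto;      # spawn one worker per CPU core
worker_priority      -5;        # give workers slightly higher scheduling priority
worker_rlimit_nofile 65535;     # raise the per-worker limit on open file descriptors

events {
    multi_accept       on;      # accept as many pending connections as possible at once
    worker_connections 4096;    # maximum simultaneous connections per worker
}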

For more details, see the Release Notes.

New Features and Enhancements in Release 3.21

Released on October 27, 2021, Release 3.21 includes these new features and enhancements:

  • Initial support for snippets as an experimental feature. Customer feedback enabled us to tune the feature for the GA delivery in Release 3.22 as described above.

  • Initial support for instance groups as described above.

  • Support for NGINX Plus R19 through R25.

  • Support for NGINX App Protect WAF 3.6 and earlier.

For details, see the Release Notes.

New Features and Enhancements in Release 3.20

Released on September 14, 2021, Release 3.20 introduced greater scale, better stability, and a big leap forward in overall product quality, making possible many of the innovations in Releases 3.21 and 3.22. Features and enhancements include:

  • Introduction of Data Plane Manager (DPM) – This internal enhancement increases the overall scalability and resiliency of NGINX Controller. With DPM, you can now holistically manage significantly more NGINX Plus instances and application services from a single pane of glass, and rest assured that your Controller deployments remain available (the achievable scale varies with deployment configuration).

  • Data Explorer – You can more easily drill down into the vast stream of data and metrics produced by the NGINX Plus instances managed by Controller. Data Explorer provides powerful, actionable insights from metrics such as the amount of data generated by HTTP POST requests for a particular app this week compared to last week, or the average CPU utilization trends for an environment. Through better filtering, data dimensions, and the ability to overlay events and time scales on top of raw NGINX Plus data, you can create your own customized view of NGINX Plus data as well as generate alerts to stay in the know.

  • Additional enhancements

    • A high‑performance communication path between NGINX Controller and the Controller Agent
    • Support for NGINX App Protect WAF 3.3 through 3.5
    • Support for NGINX Plus R19 through R24

For details, see the Release Notes.

Keep Your Feedback Coming

The NGINX Controller Application Delivery Module (and the Controller platform in general) continues to evolve. Together, Releases 3.20 through 3.22 improve the platform, further simplify and streamline administration and management tasks, make extracting meaningful application insight easier, and help harden security postures. Many of these new features and enhancements are the direct result of conversations we’ve had and feedback we’ve received from you, our customers. So please keep it coming by engaging with your F5 representative.

If you haven’t had a chance to give NGINX Controller a try, now is a great time! Start a free 30-day trial of NGINX Controller today or contact us to discuss your use cases.

The post What’s New with the NGINX Controller Application Delivery Module for 2022 appeared first on NGINX.

]]>