
Building a simple WAF with Cloudflare's worker service


Note: this post is rough, mostly written from crappy notes I left myself when I built this a few months ago. Feedback or questions can be sent to me via Twitter or by commenting on the GitHub gist.

A WAF in Javascript?

That sounds ridiculous, doesn’t it? Cloudflare has a lot of really cool products, but my favorite release so far is their worker service. Using this service we can access bits of Cloudflare’s APIs to handle requests however we like, all abstracted through JavaScript. Your code runs in an ephemeral environment on a Cloudflare edge server with the V8 engine, and it only executes when a request hits a site with workers enabled. JavaScript is not my language of choice, so I’m sure you will find my code hideous. I hope you find this research and example useful.

You can learn more about the things workers can do in the introduction blog post here. Some of the cool things Cloudflare’s worker service allows you to do:

  • add logic to each request that allows you to determine which requests are cached
  • implement your own quality of service controls without needing to have it handled on your backend service
  • access user input from the HTTP request
  • respond to requests without connecting to the backend service
  • block malicious bots or spammers without the load hitting your backend service

All of these features indicate to me that we could build a simple, yet effective Web App Firewall system with the worker service.
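The core of such a WAF is just signature matching with a running risk score. Here is a minimal sketch of that idea; the three signatures are illustrative stand-ins, not the rule set from the gist below.

```javascript
// Illustrative signatures: a single quote (common SQLi probe), a naive
// reflected-XSS marker, and a path-traversal sequence.
const SIGNATURES = [
  "'{1}",     // single quote, start of many SQLi probes
  '<script',  // naive XSS marker
  '\\.\\./',  // path traversal
];

// Scan decoded request input against each signature and add a fixed
// weight per match; the caller compares the total to a threshold.
function scoreInput(input) {
  let score = 0;
  const decoded = decodeURIComponent(input).toLowerCase();
  for (const sig of SIGNATURES) {
    if (new RegExp(sig).test(decoded)) {
      score += 100;
    }
  }
  return score;
}
```

With a threshold of 50, any single match is enough to block the request, which mirrors how the worker below weighs its hits.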

Enabling Cloudflare workers

First you will need to log in to the Cloudflare dashboard and select the domain you wish to enable workers on. You can learn more about pricing and your options on Cloudflare’s worker product page. The most basic plan is only $5 a month for 10M requests, then it’s $0.50 per million requests thereafter.

Cloudflare worker editor


This editor is awesome. Here is where you will add the code for how your worker will function once you enable it in the Cloudflare dashboard. It gives us access to a dev console, an HTTP request tester, and a built-in documentation panel that makes it easy to find answers when you have questions. Overall the editor provides a comfortable environment for testing worker code and requests.

Wat it do?

  • Inspect POST/GET parameters for evil input
  • Send events & payloads to logging service (Loggly in this test)


One of the issues I had with the worker was event logging, which made troubleshooting the WAF signatures I was writing a pain in the ass. I consulted the worker service documentation and discovered that I could send POST requests from the worker each time a request came in. That sounds like it could cause some overhead, but this is for science, so we’ll worry about that when we get there. If we wanted, we could also integrate something like the Cymon open threat intel API to manipulate the score of requests.

You can send custom event logs from your worker application to Loggly via an HTTP endpoint that they provide. Find the docs for their HTTP endpoint here.
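One pitfall worth calling out: building the log body by string concatenation (as the gist below does) produces broken JSON whenever the payload contains a quote. A safer sketch, with the endpoint left as a placeholder for your own Loggly event URL:

```javascript
// Placeholder: substitute the HTTP/S event endpoint URL from your
// own Loggly account.
const LOGGLY_ENDPOINT = 'changeme';

// JSON.stringify escapes quotes and backslashes in attacker-controlled
// input, which naive string concatenation would not.
function buildEvent(kind, score, url, payload) {
  return JSON.stringify({ event: kind, score, url, payload });
}

// Fire an event at Loggly; in the worker this runs alongside the
// response being returned to the client.
async function logEvent(kind, score, url, payload) {
  return fetch(LOGGLY_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildEvent(kind, score, url, payload),
  });
}
```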

Things to do

  • Inspect cookie, user-agent, and other headers for bad things! Currently we’re only looking at POST and GET parameters for evil.
  • Make the risk scoring make sense
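The first TODO can be sketched with the same signature approach already used for parameters: walk the interesting headers and score each against a list. The signatures here are illustrative, not from the gist.

```javascript
// Illustrative header signatures: known scanner user-agents plus a
// single-quote SQLi probe.
const HEADER_SIGS = ['sqlmap', 'nikto', "'{1}"];

// Accepts anything with a .get(name) API, so in the worker you can
// pass request.headers directly.
function scoreHeaders(headers) {
  let score = 0;
  for (const name of ['user-agent', 'cookie']) {
    const value = (headers.get(name) || '').toLowerCase();
    for (const sig of HEADER_SIGS) {
      if (new RegExp(sig).test(value)) {
        score += 100;
      }
    }
  }
  return score;
}
```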


You can comment on this via the Github gist.

/*
 *  Web Application Firewall built with Cloudflare workers
 *  Author:  < >
 *  License: GPLv3 < >
 *  Cloudflare worker documentation:
 *  < >
 *  Event logging is with Loggly
 *  < >
 */

/*
  Start of variable config

  - Each request starts with a risk score of 0
  - Any request with a risk score greater than safe_score will be dropped
*/

var score = 0;
var safe_score = 50;

// Set this to 1 if you are using static hosting like S3 that can't process POST requests.
// Set to 0 if your backend will handle POST requests
var no_post = 0;

// loggly HTTP/S Event Endpoint to send logs to
var LOGGLY_ENDPOINT = 'changeme';

// error handling
function handle_error(err){
  // keep failures out of the request path: log them and move on
  console.log(err);
}

// event logging
function log_violation(msg){
  // fire-and-forget event to the Loggly HTTP endpoint
  fetch(LOGGLY_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'text/plain' },
    body: msg
  }).catch(handle_error);
}

function high_risk_event(input){
  // things that go here should always have higher weight because it's definitely
  // considered bad.
  var bad_input = [
    // add regexp signatures here
  ];
  bad_input.forEach(function(sig) {
    var regexp = new RegExp(sig);
    if(regexp.test(input)){
      score += 100;
      log_violation('detected '+sig+' in request input');
    }
  });
}

// Process user-agent for malicious things
function process_user_agent(ua){
  // process user-agent with our list of regular expression signatures
  var bad_agent_regexp = [
    'spider', // arachnophobia was the best movie of all time
    "'{1}" // start of some sqli sigs
  ];
  bad_agent_regexp.forEach(function(sig) {
    var regexp = new RegExp(sig);
    if(regexp.test(ua)){
      score += 100;
      log_violation('detected '+sig+' in the user-agent header');
    }
  });
}

// Process URL
function process_url(url){
  var bad_url_sigs = [
    // add regexp signatures here
  ];
  bad_url_sigs.forEach(function(sig) {
    var regexp = new RegExp(sig);
    if(regexp.test(url)){
      score += 100;
      log_violation('detected '+sig+' in the url');
    }
  });
}

// Process POST input before sending to the backend
function process_post(postData){
  // start of regexp sigs
  var bad_post_sigs = [
    // add regexp signatures here
  ];
  bad_post_sigs.forEach(function(sig) {
    var regexp = new RegExp(sig);
    if(regexp.test(postData)){
      score += 100;
      log_violation('detected '+sig+' in POST data');
    }
  });
}

// start the CF worker event listener
addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request));
});

async function fetchAndApply(request) {
  // We catch the exception and set ua to 0 if there
  // is no user-agent header in the request
  var ua;
  try {
    // start user-agent analysis
    ua = request.headers.get('user-agent').toLowerCase();
    process_user_agent(ua);
  } catch(err) {
    ua = 0;
  }

  // start URL analysis
  var url = request.url.toLowerCase();
  process_url(decodeURIComponent(url));

  // inspect POST requests for bad things
  if(request.method == 'POST'){
    if(no_post == 1){
      return new Response('Method not allowed', {status: 405, statusText: 'denied'});
    } else {
      let body = await request.text();
      process_post(decodeURIComponent(body));

      // we log all POST data to loggly as json
      let headers = { 'Content-Type': 'application/json' };
      const init = { method: 'POST', headers: headers, body: JSON.stringify({ event: 'post_request', score: score, payload: decodeURIComponent(body), url: decodeURIComponent(request.url) }) };
      await fetch(LOGGLY_ENDPOINT, init);

      // check request threat score
      if(score > safe_score){
        // return 403 page if the POST data does not pass the process_post checks
        const block = { method: 'POST', headers: headers, body: JSON.stringify({ event: 'firewall', score: score, payload: decodeURIComponent(body), url: decodeURIComponent(request.url) }) };
        await fetch(LOGGLY_ENDPOINT, block);
        return new Response('(╯°□°)╯︵ ┻━┻', {status: 403, statusText: 'Forbidden'});
      } else {
        // return request to backend with POST params since they are not bad
        let newRequest = new Request(request, { body });
        return fetch(newRequest);
      }
    }
  } else {
    // proceed with GET request scoring
    if(score > safe_score){
      let headers = { 'Content-Type': 'application/json' };
      const init = { method: 'POST', headers: headers, body: JSON.stringify({ event: 'firewall', score: score, url: decodeURIComponent(request.url) }) };
      await fetch(LOGGLY_ENDPOINT, init);
      return new Response('(╯°□°)╯︵ ┻━┻', {status: 403, statusText: 'Forbidden'});
    } else {
      return fetch(request);
    }
  }
}