When I first saw that Jest was running sl so many times, my first thought was to ask my colleague if sl is a valid command on his Mac, and of course it is not. After all, which serious engineer would stuff their machine full of silly commands like sl, gti, cowsay, or toilet? The next thing I tried was to rename sl to something else, and sure enough all my problems disappeared: yarn test started working perfectly.
So what does Jest have to do with Steam Locomotives?
Nothing, that’s what. The whole affair is an unfortunate naming clash between sl the Steam Locomotive and sl the Sapling CLI. Jest wanted sl the source control system, but ended up getting steam-rolled by sl the Steam Locomotive.
Fortunately the devs took it in good humor, and made a (still unreleased) fix. Check out the train memes!
At this point the main story has ended. However, there are still some unresolved nagging questions, like…
How did the crash arrive at the magic number of a relatively even 27 seconds?
I don’t know. Actually I’m not sure if a forked child executing sl still has a terminal anymore, but the travel time of the train does depend on the terminal width. The wider it is, the longer it takes:

🌈 ~ tput cols
425
🌈 ~ time sl
sl 0.19s user 0.06s system 1% cpu 20.629 total
🌈 ~ tput cols
58
🌈 ~ time sl
sl 0.03s user 0.01s system 0% cpu 5.695 total
So the first thing I tried was to run yarn test in a ridiculously narrow terminal and see what happens:
Determin
ing test
 suites 
to run..
. 

 ● Test
 suite f
ailed to
 run 

thrown: 
[Error] 

error Co
mmand fa
iled wit
h exit c
ode 1. 
info Vis
it https
://yarnp
kg.com/e
n/docs/c
li/run f
or docum
entation
 about t
his comm
and. 
yarn tes
t 1.92s
 user 0.
67s syst
em 9% cp
u 27.088
 total 
🌈 back
stage [m
aster] t
put cols

8
Alas, the terminal width doesn’t affect Jest at all. Jest calls sl via execa, so let’s mock that up locally:

🌈 choochoo cat runSl.mjs
import {execa} from 'execa';
const { stdout } = await execa('tput', ['cols']);
console.log('terminal colwidth:', stdout);
await execa('sl', ['root']);
🌈 choochoo time node runSl.mjs
terminal colwidth: 80
node runSl.mjs 0.21s user 0.06s system 4% cpu 6.730 total
So execa uses the default terminal width of 80, which takes the train 6.7 seconds to cross. And 27 seconds divided by 6.7 is awfully close to 4. So is Jest running sl 4 times? Let’s do a poor man’s bpftrace by hooking into sl like so:

#!/bin/bash

uniqid=$RANDOM
echo "$(date --utc +"%Y-%m-%d %H:%M:%S.%N") $uniqid started" >> /home/yew/executed.log
/usr/games/sl.actual "$@"
echo "$(date --utc +"%Y-%m-%d %H:%M:%S.%N") $uniqid ended" >> /home/yew/executed.log
And if we check executed.log, sl is indeed executed in 4 waves, albeit by 5 workers simultaneously in each wave:

#wave1
2025-03-20 13:23:57.125482563 21049 started
2025-03-20 13:23:57.127526987 21666 started
2025-03-20 13:23:57.131099388 4897 started
2025-03-20 13:23:57.134237754 102 started
2025-03-20 13:23:57.137091737 15733 started
#wave1 ends, wave2 starts
2025-03-20 13:24:03.704588580 21666 ended
2025-03-20 13:24:03.704621737 21049 ended
2025-03-20 13:24:03.707780748 4897 ended
2025-03-20 13:24:03.712086346 15733 ended
2025-03-20 13:24:03.711953000 102 ended
2025-03-20 13:24:03.714831149 18018 started
2025-03-20 13:24:03.721293279 23293 started
2025-03-20 13:24:03.724600164 27918 started
2025-03-20 13:24:03.729763900 15091 started
2025-03-20 13:24:03.733176122 18473 started
#wave2 ends, wave3 starts
2025-03-20 13:24:10.294286746 18018 ended
2025-03-20 13:24:10.297261754 23293 ended
2025-03-20 13:24:10.300925031 27918 ended
2025-03-20 13:24:10.300950334 15091 ended
2025-03-20 13:24:10.303498710 24873 started
2025-03-20 13:24:10.303980494 18473 ended
2025-03-20 13:24:10.308560194 31825 started
2025-03-20 13:24:10.310595182 18452 started
2025-03-20 13:24:10.314222848 16121 started
2025-03-20 13:24:10.317875812 30892 started
#wave3 ends, wave4 starts
2025-03-20 13:24:16.883609316 24873 ended
2025-03-20 13:24:16.886708598 18452 ended
2025-03-20 13:24:16.886867725 31825 ended
2025-03-20 13:24:16.890735338 16121 ended
2025-03-20 13:24:16.893661911 21975 started
2025-03-20 13:24:16.898525968 30892 ended
#crash imminent! wave4 ending, wave5 starting...
2025-03-20 13:24:23.474925807 21975 ended
The logs were emitted for about 26.35 seconds, which is close to 27. It probably crashed just as wave4 was reporting back. And each wave lasts about 6.7 seconds, right on the money with manual measurement.
So why is Jest running sl in 4 waves? Why did it crash at the start of the 5th wave?
Let’s again modify the poor man’s bpftrace to also log the args and working directory:

echo "$(date --utc +"%Y-%m-%d %H:%M:%S.%N") $uniqid started: $@ at $PWD" >> /home/yew/executed.log
From the results we can see that the 5 workers are busy executing sl root, which corresponds to the getRoot() function in jest-changed-files/sl.ts:

2025-03-21 05:50:22.663263304 started: root at /home/yew/cloudflare/repos/backstage/packages/app/src
2025-03-21 05:50:22.665550470 started: root at /home/yew/cloudflare/repos/backstage/packages/backend/src
2025-03-21 05:50:22.667988509 started: root at /home/yew/cloudflare/repos/backstage/plugins/access/src
2025-03-21 05:50:22.671781519 started: root at /home/yew/cloudflare/repos/backstage/plugins/backstage-components/src
2025-03-21 05:50:22.673690514 started: root at /home/yew/cloudflare/repos/backstage/plugins/backstage-entities/src
2025-03-21 05:50:29.247573899 started: root at /home/yew/cloudflare/repos/backstage/plugins/catalog-types-common/src
2025-03-21 05:50:29.251173536 started: root at /home/yew/cloudflare/repos/backstage/plugins/cross-connects/src
2025-03-21 05:50:29.255263605 started: root at /home/yew/cloudflare/repos/backstage/plugins/cross-connects-backend/src
2025-03-21 05:50:29.257293780 started: root at /home/yew/cloudflare/repos/backstage/plugins/pingboard-backend/src
2025-03-21 05:50:29.260285783 started: root at /home/yew/cloudflare/repos/backstage/plugins/resource-insights/src
2025-03-21 05:50:35.823374079 started: root at /home/yew/cloudflare/repos/backstage/plugins/scaffolder-backend-module-gaia/src
2025-03-21 05:50:35.825418386 started: root at /home/yew/cloudflare/repos/backstage/plugins/scaffolder-backend-module-r2/src
2025-03-21 05:50:35.829963172 started: root at /home/yew/cloudflare/repos/backstage/plugins/security-scorecard-dash/src
2025-03-21 05:50:35.832597778 started: root at /home/yew/cloudflare/repos/backstage/plugins/slo-directory/src
2025-03-21 05:50:35.834631869 started: root at /home/yew/cloudflare/repos/backstage/plugins/software-excellence-dashboard/src
2025-03-21 05:50:42.404063080 started: root at /home/yew/cloudflare/repos/backstage/plugins/teamcity/src
The 16 entries here correspond neatly to the 16 rootDirs configured in Jest for Cloudflare’s backstage. We have 5 trains and we want to visit 16 stations, so let’s do some simple math: 16 / 5 = 3.2, which means our trains need to go back and forth at least 4 times to cover them all.
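For context, here is a rough sketch of what each of those sl root lookups amounts to. This is not Jest's exact source, just the shape of it based on the execa call visible in the traces: ask the version control system for the repository root of one rootDir and treat whatever lands on stdout as that root.

import { execa } from 'execa';

// Sketch of a getRoot()-style helper: one `sl root` call per configured rootDir.
const getRoot = async (dir: string): Promise<string | null> => {
  try {
    const result = await execa('sl', ['root'], { cwd: dir });
    // With Steam Locomotive installed instead of Sapling, this "root" is not a
    // path at all: it is the full ANSI animation that sl wrote to stdout.
    return result.stdout;
  } catch {
    return null;
  }
};

With 5 workers draining a queue of 16 directories, that is exactly the 4 waves the log shows.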
Final mystery: Why did it crash?
Let’s go back to the very start of our journey. The original [Error] thrown was actually from here, and after modifying node_modules/jest-changed-files/index.js, I found that the error is shortMessage: 'Command failed with ENAMETOOLONG: sl status...', and the reason why became clear when I interrogated Jest about what it thinks the repos are. While the git repo is what you’d expect, the sl “repo” looks amazingly like a train wreck in motion:
got repos.git as Set(1) { '/home/yew/cloudflare/repos/backstage' }\ngot repos.sl as Set(1) {\n '\\x1B[?1049h\\x1B[1;24r\\x1B[m\\x1B(B\\x1B[4l\\x1B[?7h\\x1B[?25l\\x1B[H\\x1B[2J\\x1B[15;80H_\\x1B[15;79H_\\x1B[16d|\\x1B[9;80H_\\x1B[12;80H|\\x1B[13;80H|\\x1B[14;80H|\\x1B[15;78H__/\\x1B[16;79H|/\\x1B[17;80H\\\\\\x1B[9;\n 79H_D\\x1B[10;80H|\\x1B[11;80H/\\x1B[12;79H|\\x1B[K\\x1B[13d\\b|\\x1B[K\\x1B[14d\\b|/\\x1B[15;1H\\x1B[1P\\x1B[16;78H|/-\\x1B[17;79H\\\\_\\x1B[9;1H\\x1B[1P\\x1B[10;79H|(\\x1B[11;79H/\\x1B[K\\x1B[12d\\b\\b|\\x1B[K\\x1B[13d\\b|\n _\\x1B[14;1H\\x1B[1P\\x1B[15;76H__/ =\\x1B[16;77H|/-=\\x1B[17;78H\\\\_/\\x1B[9;77H_D _\\x1B[10;78H|(_\\x1B[11;78H/\\x1B[K\\x1B[12d\\b\\b|\\x1B[K\\x1B[13d\\b| _\\x1B[14;77H"https://blog.cloudflare.com/"\\x1B[15;75H__/\n =|\\x1B[16;76H|/-=|\\x1B[17;1H\\x1B[1P\\x1B[8;80H=\\x1B[9;76H_D _|\\x1B[10;77H|(_)\\x1B[11;77H/\\x1B[K\\x1B[12d\\b\\b|\\x1B[K\\x1B[13d\\b|\n _\\r\\x1B[14d\\x1B[1P\\x1B[15d\\x1B[1P\\x1B[16;75H|/-=|_\\x1B[17;1H\\x1B[1P\\x1B[8;79H=\\r\\x1B[9d\\x1B[1P\\x1B[10;76H|(_)-\\x1B[11;76H/\\x1B[K\\x1B[12d\\b\\b|\\x1B[K\\x1B[13d\\b| _\\r\\x1B[14d\\x1B[1P\\x1B[15;73H__/ =|\n o\\x1B[16;74H|/-=|_\\r\\x1B[17d\\x1B[1P\\x1B[8;78H=\\r\\x1B[9d\\x1B[1P\\x1B[10;75H|(_)-\\x1B[11;75H/\\x1B[K\\x1B[12d\\b\\b|\\x1B[K\\x1B[13d\\b|\n _\\r\\x1B[14d\\x1B[1P\\x1B[15d\\x1B[1P\\x1B[16;73H|/-=|_\\r\\x1B[17d\\x1B[1P\\x1B[8;77H=\\x1B[9;73H_D _| |\\x1B[10;74H|(_)-\\x1B[11;74H/ |\\x1B[12;73H| |\\x1B[13;73H| _\\x1B[14;73H"https://blog.cloudflare.com/" |\\x1B[15;71H__/\n =| o |\\x1B[16;72H|/-=|___|\\x1B[17;1H\\x1B[1P\\x 1B[5;79H(@\\x1B[7;77H(\\r\\x1B[8d\\x1B[1P\\x1B[9;72H_D _| |_\\x1B[10;1H\\x1B[1P\\x1B[11d\\x1B[1P\\x1B[12d\\x1B[1P\\x1B[13;72H| _\\x1B[14;72H"https://blog.cloudflare.com/" |-\\x1B[15;70H__/\n =| o |=\\x1B[16;71H|/-=|___|=\\x1B[17;1H\\x1B[1P\\x1B[8d\\x1B[1P\\x1B[9;71H_D _| |_\\r\\x1B[10d\\x1B[1P\\x1B[11d\\x1B[1P\\x1B[12d\\x1B[1P\\x1B[13;71H| _\\x1B[14;71H"https://blog.cloudflare.com/" |-\\x1B[15;69H__/ =| o\n |=-\\x1B[16;70H|/-=|___|=O\\x1B[17;71H\\\\_/ \\\\\\x1B[8;1H\\x1B[1P\\x1B[9;70H_D _| |_\\x1B[10;71H|(_)--- |\\x1B[11;71H/ | |\\x1B[12;70H| | |\\x1B[13;70H| _\\x1B[80G|\\x1B[14;70H"https://blog.cloudflare.com/"\n |-\\x1B[15;68H__/ =| o |=-~\\x1B[16;69H|/-=|___|=\\x1B[K\\x1B[17;70H\\\\_/ \\\\O\\x1B[8;1H\\x1B[1P\\x1B[9;69H_D _| |_\\r\\x1B[10d\\x1B[1P\\x1B[11d\\x1B[1P\\x1B[12d\\x1B[1P\\x1B[13;69H| _\\x1B[79G|_\\x1B[14;69H"https://blog.cloudflare.com/"\n |-\\x1B[15;67H__/ =| o |=-~\\r\\x1B[16d\\x1B[1P\\x1B[17;69H\\\\_/ \\\\_\\x1B[4d\\b\\b(@@\\x1B[5;75H( )\\x1B[7;73H(@@@)\\r\\x1B[8d\\x1B[1P\\x1B[9;68H_D _|\n |_\\r\\x1B[10d\\x1B[1P\\x1B[11d\\x1B[1P\\x1B[12d\\x1B[1P\\x1B[13;68H| _\\x1B[78G|_\\x1B[14;68H"https://blog.cloudflare.com/" |-\\x1B[15;66H__/ =| o |=-~~\\\\\\x1B[16;67H|/-=|___|= O\\x1B[17;68H\\\\_/ \\\\__/\\x1B[8;1H\\x1B[1P\\x1B[9;67H_D _|\n |_\\r\\x1B[10d\\x1B[1P\\x1B[11d\\x1B[1P\\x1B[12d\\x1B[1P\\x1B[13;67H| _\\x1B[77G|_\\x1B[14;67H"https://blog.cloudflare.com/" |-\\x1B[15;65H__/ =| o |=-~O==\\x1B[16;66H|/-=|___|= |\\x1B[17;1H\\x1B[1P\\x1B[8d\\x1B[1P\\x1B[9;66H_D _|\n |_\\x1B[10;67H|(_)--- | H\\x1B[11;67H/ | | H\\x1B[12;66H| | | H\\x1B[13;66H| _\\x1B[76G|___H\\x1B[14;66H"https://blog.cloudflare.com/" |-\\x1B[15;64H__/ =| o |=-O==\\x1B[16;65H|/-=|___|=\n |\\r\\x1B[17d\\x1B[1P\\x1B[8d\\x1B[1P\\x1B[9;65H_D _| |_\\x1B[80G/\\x1B[10;66H|(_)--- | H\\\\\\x1B[11;1H\\x1B[1P\\x1B[12d\\x1B[1P\\x1B[13;65H| _\\x1B[75G|___H_\\x1B[14;65H"https://blog.cloudflare.com/" |-\\x1B[15;63H__/ =| o |=-~~\\\\\n /\\x1B[16;64H|/-=|___|=O=====O\\x1B[17;65H\\\\_/ \\\\__/ 
\\\\\\x1B[1;4r\\x1B[4;1H\\n' + '\\x1B[1;24r\\x1B[4;74H( )\\x1B[5;71H(@@@@)\\x1B[K\\x1B[7;69H( )\\x1B[K\\x1B[8;68H====\n \\x1B[80G_\\x1B[9;1H\\x1B[1P\\x1B[10;65H|(_)--- | H\\\\_\\x1B[11;1H\\x1B[1P\\x1B[12d\\x1B[1P\\x1B[13;64H| _\\x1B[74G|___H_\\x1B[14;64H"https://blog.cloudflare.com/" |-\\x1B[15;62H__/ =| o |=-~~\\\\ /~\\x1B[16;63H|/-=|___|=\n ||\\x1B[K\\x1B[17;64H\\\\_/ \\\\O=====O\\x1B[8;67H==== \\x1B[79G_\\r\\x1B[9d\\x1B[1P\\x1B[10;64H|(_)--- | H\\\\_\\x1B[11;64H/ | | H |\\x1B[12;63H| | | H |\\x1B[13;63H|\n _\\x1B[73G|___H__/\\x1B[14;63H"https://blog.cloudflare.com/" |-\\x1B[15;61H__/ =| o |=-~~\\\\ /~\\r\\x1B[16d\\x1B[1P\\x1B[17;63H\\\\_/ \\\\_\\x1B[8;66H==== \\x1B[78G_\\r\\x1B[9d\\x1B[1P\\x1B[10;63H|(_)--- |\n H\\\\_\\r\\x1B[11d\\x1B[1P\\x1B[12;62H| | | H |_\\x1B[13;62H| _\\x1B[72G|___H__/_\\x1B[14;62H"https://blog.cloudflare.com/" |-\\x1B[15;60H__/ =| o |=-~~\\\\ /~~\\\\\\x1B[16;61H|/-=|___|= O=====O\\x1B[17;62H\\\\_/ \\\\__/\n \\\\__/\\x1B[8;65H==== \\x1B[77G_\\r\\x1B[9d\\x1B[1P\\x1B[10;62H|(_)--- | H\\\\_\\r\\x1B[11d\\x1B[1P\\x1B[12;61H| | | H |_\\x1B[13;61H| _\\x1B[71G|___H__/_\\x1B[14;61H"https://blog.cloudflare.com/" |-\\x1B[80GI\\x1B[15;59H__/ =|\n o |=-~O=====O==\\x1B[16;60H|/-=|___|= || |\\x1B[17;1H\\x1B[1P\\x1B[2;79H(@\\x1B[3;74H( )\\x1B[K\\x1B[4;70H(@@@@)\\x1B[K\\x1B[5;67H( )\\x1B[K\\x1B[7;65H(@@@)\\x1B[K\\x1B[8;64H====\n \\x1B[76G_\\r\\x1B[9d\\x1B[1P\\x1B[10;61H|(_)--- | H\\\\_\\x1B[11;61H/ | | H | |\\x1B[12;60H| | | H |__-\\x1B[13;60H| _\\x1B[70G|___H__/__|\\x1B[14;60H"https://blog.cloudflare.com/" |-\\x1B[79GI_\\x1B[15;58H__/ =| o\n |=-O=====O==\\x1B[16;59H|/-=|___|= || |\\r\\x1B[17d\\x1B[1P\\x1B[8;63H==== \\x1B[75G_\\r\\x1B[9d\\x1B[1P\\x1B[10;60H|(_)--- | H\\\\_\\r\\x1B[11d\\x1B[1P\\x1B[12;59H| | | H |__-\\x1B[13;59H|\n _\\x1B[69G|___H__/__|_\\x1B[14;59H"https://blog.cloudflare.com/" |-\\x1B[78GI_\\x1B[15;57H__/ =| o |=-~~\\\\ /~~\\\\ /\\x1B[16;58H|/-=|___|=O=====O=====O\\x1B[17;59H\\\\_/ \\\\__/ \\\\__/ \\\\\\x1B[8;62H====\n \\x1B[74G_\\r\\x1B[9d\\x1B[1P\\x1B[10;59H|(_)--- | H\\\\_\\r\\x1B | | H |__-\\x1B[13;58H| _\\x1B[68G|___H__/__|_\\x1B[14;58H"https://blog.cloudflare.com/" |-\\x1B[77GI_\\x1B[15;56H__/ =| o |=-~~\\\\ /~~\\\\ /~\\x1B[16;57H|/-=|___|=\n || ||\\x1B[K\\x1B[17;58H\\\\_/ \\\\O=====O=====O\\x1B[8;61H==== \\x1B[73G_\\r\\x1B[9d\\x1B[1P\\x1B[10;58H|(_)--- _\\x1B[67G|___H__/__|_\\x1B[14;57H"https://blog.cloudflare.com/" |-\\x1B[76GI_\\x1B[15;55H__/ =| o |=-~~\\\\ /~~\\\\\n /~\\r\\x1B[16d\\x1B[1P\\x1B[17;57H\\\\_/ \\\\_\\x1B[2;75H( ) (\\x1B[3;70H(@@@)\\x1B[K\\x1B[4;66H()\\x1B[K\\x1B[5;63H(@@@@)\\x1B[
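My reading of the crash, then (a reconstruction rather than the actual jest-changed-files code): that captured "repo root" gets fed back into the follow-up sl status call, and a multi-kilobyte blob of escape codes is far too long to be a valid path, so the spawn fails with ENAMETOOLONG.

import { execa } from 'execa';

// Reconstruction of the failure mode, not the real Jest source: the "root"
// discovered above is reused as the working directory for the status query.
const findChangedFiles = async (root: string) => {
  // root === '\x1B[?1049h\x1B[1;24r...' (thousands of characters of train animation)
  return execa('sl', ['status'], { cwd: root });
  // => Error: Command failed with ENAMETOOLONG: sl status ...
};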
Acknowledgements
Thank you to my colleagues Mengnan Gong and Shuhao Zhang, whose ideas and perspectives helped narrow down the root causes of this mystery.
If you enjoy troubleshooting weird and tricky production issues, our engineering teams are hiring.
Build and deploy Remote Model Context Protocol (MCP) servers to Cloudflare
It feels like almost everyone building AI applications and agents is talking about the Model Context Protocol (MCP), as well as building MCP servers that you install and run locally on your own computer.
You can now build and deploy remote MCP servers to Cloudflare. We’ve added four things to Cloudflare that handle the hard parts of building remote MCP servers for you:
- workers-oauth-provider — an OAuth Provider that makes authorization easy
- McpAgent — a class built into the Cloudflare Agents SDK that handles remote transport
- mcp-remote — an adapter that lets MCP clients that otherwise only support local connections work with remote MCP servers
- AI playground as a remote MCP client — a chat interface that allows you to connect to remote MCP servers, with the authentication check included
The developer docs will get you up and running in production with this example MCP server in less than two minutes.
Unlike the local MCP servers you may have previously used, remote MCP servers are accessible on the Internet. People simply sign in and grant permissions to MCP clients using familiar authorization flows. We think this is going to be a massive deal — connecting coding agents to MCP servers has blown developers’ minds over the past few months, and remote MCP servers have the same potential to open up similar new ways of working with LLMs and agents to a much wider audience, including more everyday consumer use cases.
From local to remote — bringing MCP to the masses
MCP is quickly becoming the common protocol that enables LLMs to go beyond inference and RAG, and take actions that require access beyond the AI application itself (like sending an email, deploying a code change, publishing blog posts, you name it). It enables AI agents (MCP clients) to access tools and resources from external services (MCP servers).
To date, MCP has been limited to running locally on your own machine — if you want to access a tool on the web using MCP, it’s up to you to set up the server locally. You haven’t been able to use MCP from web-based interfaces or mobile apps, and there hasn’t been a way to let people authenticate and grant the MCP client permission. Effectively, MCP servers haven’t yet been brought online.
Support for remote MCP connections changes this. It creates the opportunity to reach a wider audience of Internet users who aren’t going to install and run MCP servers locally for use with desktop apps. Remote MCP support is like the transition from desktop software to web-based software. People expect to continue tasks across devices and to login and have things just work. Local MCP is great for developers, but remote MCP connections are the missing piece to reach everyone on the Internet.
Making authentication and authorization just work with MCP
Beyond just changing the transport layer — from stdio to streamable HTTP — when you build a remote MCP server that uses information from the end user’s account, you need authentication and authorization. You need a way to allow users to log in and prove who they are (authentication) and a way for users to control what the AI agent will be able to access when using a service (authorization).
MCP does this with OAuth, which has been the standard protocol that allows users to grant applications access to their information or services, without sharing passwords. Here, the MCP Server itself acts as the OAuth Provider. However, OAuth with MCP is hard to implement yourself, so when you build MCP servers on Cloudflare we provide it for you.
workers-oauth-provider — an OAuth 2.1 Provider library for Cloudflare Workers
When you deploy an MCP Server to Cloudflare, your Worker acts as an OAuth Provider, using workers-oauth-provider, a new TypeScript library that wraps your Worker’s code, adding authorization to API endpoints, including (but not limited to) MCP server API endpoints.
Your MCP server will receive the already-authenticated user details as a parameter. You don’t need to perform any checks of your own, or directly manage tokens. You can still fully control how you authenticate users: from what UI they see when they log in, to which provider they use to log in. You can choose to bring your own third-party authentication and authorization providers like Google or GitHub, or integrate with your own.
The complete MCP OAuth flow looks like this:
Here, your MCP server acts as both an OAuth client to your upstream service, and as an OAuth server (also referred to as an OAuth “provider”) to MCP clients. You can use any upstream authentication flow you want, but workers-oauth-provider guarantees that your MCP server is spec-compliant and able to work with the full range of client apps & websites. This includes support for Dynamic Client Registration (RFC 7591) and Authorization Server Metadata (RFC 8414).
A simple, pluggable interface for OAuth
When you build an MCP server with Cloudflare Workers, you provide the OAuth Provider instance with paths to your authorization, token, and client registration endpoints, along with handlers for your MCP Server and for auth:
import OAuthProvider from "@cloudflare/workers-oauth-provider";
import MyMCPServer from "./my-mcp-server";
import MyAuthHandler from "./auth-handler";

export default new OAuthProvider({
  apiRoute: "/sse", // MCP clients connect to your server at this route
  apiHandler: MyMCPServer.mount('/sse'), // Your MCP Server implementation
  defaultHandler: MyAuthHandler, // Your authentication implementation
  authorizeEndpoint: "/authorize",
  tokenEndpoint: "/token",
  clientRegistrationEndpoint: "/register",
});
This abstraction lets you easily plug in your own authentication. Take a look at this example that uses GitHub as the identity provider for an MCP server, in less than 100 lines of code, by implementing /callback and /authorize routes.
Why do MCP servers issue their own tokens?
You may have noticed in the authorization diagram above, and in the authorization section of the MCP spec, that the MCP server issues its own token to the MCP client.
Instead of passing the token it receives from the upstream provider directly to the MCP client, your Worker stores an encrypted access token in Workers KV. It then issues its own token to the client. As shown in the GitHub example above, this is handled on your behalf by the workers-oauth-provider — your code never directly handles writing this token, preventing mistakes. You can see this in the following code snippet from the GitHub example above:
// When you call completeAuthorization, the accessToken you pass to it
// is encrypted and stored, and never exposed to the MCP client
// A new, separate token is generated and provided to the client at the /token endpoint
const { redirectTo } = await c.env.OAUTH_PROVIDER.completeAuthorization({
  request: oauthReqInfo,
  userId: login,
  metadata: { label: name },
  scope: oauthReqInfo.scope,
  props: {
    accessToken, // Stored encrypted, never sent to MCP client
  },
})

return Response.redirect(redirectTo)
On the surface, this indirection might sound more complicated. Why does it work this way?
By issuing its own token, MCP Servers can restrict access and enforce more granular controls than the upstream provider. If a token you issue to an MCP client is compromised, the attacker only gets the limited permissions you’ve explicitly granted through your MCP tools, not the full access of the original token.
Let’s say your MCP server requests that the user authorize permission to read emails from their Gmail account, using the gmail.readonly scope. The tool that the MCP server exposes is more narrow, and allows reading travel booking notifications from a limited set of senders, to handle a question like “What’s the check-out time for my hotel room tomorrow?” You can enforce this constraint in your MCP server, and if the token you issue to the MCP client is compromised, because the token is to your MCP server — and not the raw token to the upstream provider (Google) — an attacker cannot use it to read arbitrary emails. They can only call the tools your MCP server provides. OWASP calls out “Excessive Agency” as one of the top risk factors for building AI applications, and by issuing its own token to the client and enforcing constraints, your MCP server can limit tools access to only what the client needs.
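As an illustration of that Gmail scenario (a sketch only: the tool name, sender list, and Gmail query are hypothetical, and it assumes the upstream token was stored in props during authorization, as in the GitHub example), the tool itself can hard-code the constraint so the client never gets to choose what is read:

import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Hypothetical fixed allowlist of booking senders; the tool never exposes arbitrary mail access.
const BOOKING_SENDERS = ["noreply@bookings.example.com", "reservations@hotels.example.com"];

export class TravelMCP extends McpAgent<Env, unknown, { accessToken: string }> {
  server = new McpServer({ name: "Travel Demo", version: "1.0.0" });

  async init() {
    this.server.tool(
      "findTravelEmails",
      "Search booking notifications from a fixed set of senders.",
      { query: z.string().describe("Free-text search, e.g. 'check-out time'") },
      async ({ query }) => {
        // The gmail.readonly token stays server-side in this.props; the sender
        // constraint is enforced here, so a leaked client token cannot read arbitrary email.
        const q = `(${BOOKING_SENDERS.map((s) => `from:${s}`).join(" OR ")}) ${query}`;
        const res = await fetch(
          `https://gmail.googleapis.com/gmail/v1/users/me/messages?q=${encodeURIComponent(q)}`,
          { headers: { Authorization: `Bearer ${this.props.accessToken}` } },
        );
        return { content: [{ type: "text", text: JSON.stringify(await res.json()) }] };
      },
    );
  }
}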
Or building off the earlier GitHub example, you can enforce that only a specific user is allowed to access a particular tool. In the example below, only users that are part of an allowlist can see or call the generateImage tool, which uses Workers AI to generate an image based on a prompt:
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const USER_ALLOWLIST = new Set(["geelen"]);

export class MyMCP extends McpAgent<Props, Env> {
  server = new McpServer({
    name: "Github OAuth Proxy Demo",
    version: "1.0.0",
  });

  async init() {
    // Dynamically add tools based on the user's identity
    if (USER_ALLOWLIST.has(this.props.login)) {
      this.server.tool(
        'generateImage',
        'Generate an image using the flux-1-schnell model.',
        {
          prompt: z.string().describe('A text description of the image you want to generate.')
        },
        async ({ prompt }) => {
          const response = await this.env.AI.run('@cf/black-forest-labs/flux-1-schnell', {
            prompt,
            steps: 8
          })
          return {
            content: [{ type: 'image', data: response.image!, mimeType: 'image/jpeg' }],
          }
        }
      )
    }
  }
}
Introducing McpAgent: remote transport support that works today, and will work with the revision to the MCP spec
The next step to opening up MCP beyond your local machine is to open up a remote transport layer for communication. MCP servers you run on your local machine just communicate over stdio, but for an MCP server to be callable over the Internet, it must implement remote transport.
The McpAgent class we introduced today as part of our Agents SDK handles this for you, using Durable Objects behind the scenes to hold a persistent connection open, so that your MCP server can stream server-sent events (SSE) to the MCP client. You don’t have to write code to deal with transport or serialization yourself. A minimal MCP server in 15 lines of code can look like this:
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

export class MyMCP extends McpAgent {
  server = new McpServer({
    name: "Demo",
    version: "1.0.0",
  });
  async init() {
    this.server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
      content: [{ type: "text", text: String(a + b) }],
    }));
  }
}
After much discussion, remote transport in the MCP spec is changing, with Streamable HTTP replacing HTTP+SSE. This allows for stateless, pure HTTP connections to MCP servers, with an option to upgrade to SSE, and removes the need for the MCP client to send messages to a separate endpoint than the one it first connects to. The McpAgent class will change with it and just work with Streamable HTTP, so that you don’t have to start over to support the revision to how transport works.
This applies to future iterations of transport as well. Today, the vast majority of MCP servers only expose tools, which are simple remote procedure call (RPC) methods that can be provided by a stateless transport. But more complex human-in-the-loop and agent-to-agent interactions will need prompts and sampling. We expect these types of chatty, two-way interactions will need to be real-time, which will be challenging to do well without a bidirectional transport layer. When that time comes, Cloudflare, the Agents SDK, and Durable Objects all natively support WebSockets, which enable full-duplex, bidirectional real-time communication.
Stateful, agentic MCP servers
When you build MCP servers on Cloudflare, each MCP client session is backed by a Durable Object, via the Agents SDK. This means each session can manage and persist its own state, backed by its own SQL database.
This opens the door to building stateful MCP servers. Rather than just acting as a stateless layer between a client app and an external API, MCP servers on Cloudflare can themselves be stateful applications — games, a shopping cart plus checkout flow, a persistent knowledge graph, or anything else you can dream up. When you build on Cloudflare, MCP servers can be much more than a layer in front of your REST API.
To understand the basics of how this works, let’s look at a minimal example that increments a counter:
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

type State = { counter: number }

export class MyMCP extends McpAgent<Env, State, {}> {
  server = new McpServer({
    name: "Demo",
    version: "1.0.0",
  });

  initialState: State = {
    counter: 1,
  }

  async init() {
    this.server.resource(`counter`, `mcp://resource/counter`, (uri) => {
      return {
        contents: [{ uri: uri.href, text: String(this.state.counter) }],
      }
    })

    this.server.tool('add', 'Add to the counter, stored in the MCP', { a: z.number() }, async ({ a }) => {
      this.setState({ ...this.state, counter: this.state.counter + a })

      return {
        content: [{ type: 'text', text: String(`Added ${a}, total is now ${this.state.counter}`) }],
      }
    })
  }

  onStateUpdate(state: State) {
    console.log({ stateUpdate: state })
  }
}
For a given session, the MCP server above will remember the state of the counter across tool calls.
From within an MCP server, you can use Cloudflare’s whole developer platform, and have your MCP server spin up its own web browser, trigger a Workflow, call AI models, and more. We’re excited to see the MCP ecosystem evolve into more advanced use cases.
Connect to remote MCP servers from MCP clients that today only support local MCP
Cloudflare is supporting remote MCP early — before the most prominent MCP client applications support remote, authenticated MCP, and before other platforms support remote MCP. We’re doing this to give you a head start building for where MCP is headed.
But if you build a remote MCP server today, this presents a challenge — how can people start using your MCP server if there aren’t MCP clients that support remote MCP?
We have two new tools that allow you to test your remote MCP server and simulate how users will interact with it in the future:
We updated the Workers AI Playground to be a fully remote MCP client that allows you to connect to any remote MCP server with built-in authentication support. This online chat interface lets you immediately test your remote MCP servers without having to install anything on your device. Instead, just enter the remote MCP server’s URL (e.g. https://remote-server.example.com/sse) and click Connect.
Once you click Connect, you’ll go through the authentication flow (if you set one up) and after, you will be able to interact with the MCP server tools directly from the chat interface.
If you prefer to use a client like Claude Desktop or Cursor that already supports MCP but doesn’t yet handle remote connections with authentication, you can use mcp-remote. mcp-remote is an adapter that lets MCP clients that otherwise only support local connections work with remote MCP servers. This gives you and your users the ability to preview what interactions with your remote MCP server will be like from the tools you’re already using today, without having to wait for the client to support remote MCP natively.
We’ve published a guide on how to use mcp-remote with popular MCP clients including Claude Desktop, Cursor, and Windsurf. In Claude Desktop, you add the following to your configuration file:
{
  "mcpServers": {
    "remote-example": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://remote-server.example.com/sse"
      ]
    }
  }
}
Remote Model Context Protocol (MCP) is coming! When client apps support remote MCP servers, the audience of people who can use them opens up from just us, developers, to the rest of the population — who may never even know what MCP is or stands for.
Building a remote MCP server is the way to bring your service into the AI assistants and tools that millions of people use. We’re excited to see that many of the biggest companies on the Internet are busy building MCP servers right now, and we are curious about the businesses that pop up in an agent-first, MCP-native way.
On Cloudflare, you can start building today. We’re ready for you, and ready to help build with you. Email us at 1800-mcp@cloudflare.com, and we’ll help get you going. There’s lots more to come with MCP, and we’re excited to see what you build.
New URLPattern API brings improved pattern matching to Node.js and Cloudflare Workers
Today, we are excited to announce that we have contributed an implementation of the URLPattern API to Node.js, and it is available starting with the v23.8.0 update. We’ve done this by adding our URLPattern implementation to Ada URL, the high-performance URL parser that now powers URL handling in both Node.js and Cloudflare Workers. This marks an important step toward bringing this API to the broader JavaScript ecosystem.
Cloudflare Workers has, from the beginning, embraced a standards-based JavaScript programming model, and Cloudflare was one of the founding companies for what has evolved into ECMA’s 55th Technical Committee, focusing on interoperability between Web-interoperable runtimes like Workers, Node.js, Deno, and others. This contribution highlights and marks our commitment to this ongoing philosophy. Ensuring that all the JavaScript runtimes work consistently and offer at least a minimally consistent set of features is critical to ensuring the ongoing health of the ecosystem as a whole.
URLPattern API contribution is just one example of Cloudflare’s ongoing commitment to the open-source ecosystem. We actively contribute to numerous open-source projects including Node.js, V8, and Ada URL, while also maintaining our own open-source initiatives like workerd and wrangler. By upstreaming improvements to foundational technologies that power the web, we strengthen the entire developer ecosystem while ensuring consistent features across JavaScript runtimes. This collaborative approach reflects our belief that open standards and shared implementations benefit everyone – reducing fragmentation, improving developer experience and creating a better Internet.
What is URLPattern?
URLPattern is a standard published by the WHATWG (Web Hypertext Application Technology Working Group) which provides a pattern-matching system for URLs. This specification is available at urlpattern.spec.whatwg.org. The API provides developers with an easy-to-use, regular expression (regex)-based approach to handling route matching, with built-in support for named parameters, wildcards, and more complex pattern matching that works uniformly across all URL components.
URLPattern is part of the WinterTC Minimum Common API, a soon-to-be standardized subset of web platform APIs designed to ensure interoperability across JavaScript runtimes, particularly for server-side and non-browser environments, and includes other APIs such as URL and URLSearchParams.
Cloudflare Workers has supported URLPattern for a number of years now, reflecting our commitment to enabling developers to use standard APIs across both browsers and server-side JavaScript runtimes. Contributing to Node.js and unifying the URLPattern implementation simplifies the ecosystem by reducing fragmentation, while at the same time improving our own implementation in Cloudflare Workers by making it faster and more specification compliant.
The following example demonstrates how URLPattern is used by creating a pattern that matches URLs with a “/blog/:year/:month/:slug” path structure, then tests if one specific URL string matches this pattern, and extracts the named parameters from a second URL using the exec method.
const pattern = new URLPattern({
  pathname: '/blog/:year/:month/:slug'
});

if (pattern.test('https://example.com/blog/2025/03/urlpattern-launch')) {
  console.log('Match found!');
}

const result = pattern.exec('https://example.com/blog/2025/03/urlpattern-launch');
console.log(result.pathname.groups.year); // "2025"
console.log(result.pathname.groups.month); // "03"
console.log(result.pathname.groups.slug); // "urlpattern-launch"
The URLPattern constructor accepts pattern strings or objects defining patterns for individual URL components. The test() method returns a boolean indicating if a URL simply matches the pattern. The exec() method provides detailed match results including captured groups. Behind this simple API, there’s sophisticated machinery at work:
- When a URLPattern is used, it internally breaks down a URL, matching it against eight distinct components: protocol, username, password, hostname, port, pathname, search, and hash. This component-based approach gives the developer control over which parts of a URL to match.
- Upon creation of the instance, URLPattern parses your input patterns for each component and compiles them internally into eight specialized regular expressions (one for each component type). This compilation step happens just once when you create a URLPattern object, optimizing subsequent matching operations.
- During a match operation (whether using test() or exec()), these regular expressions are used to determine if the input matches the given properties. The test() method tells you if there’s a match, while exec() provides detailed information about what was matched, including any named capture groups from your pattern (see the short example after this list).
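To make the component model concrete, here is a small illustrative example (the hostnames and paths are made up) that constrains two components and leaves the remaining six as wildcards:

// Only the hostname and pathname components are constrained; the rest default to wildcards.
const apiPattern = new URLPattern({
  hostname: 'api.:region.example.com',
  pathname: '/v:major/products/:id',
});

apiPattern.test('https://api.eu.example.com/v2/products/42'); // true
apiPattern.test('https://www.example.com/v2/products/42');    // false: the hostname component fails

const match = apiPattern.exec('https://api.eu.example.com/v2/products/42');
console.log(match?.hostname.groups.region); // "eu"
console.log(match?.pathname.groups.major);  // "2"
console.log(match?.pathname.groups.id);     // "42"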
Fixing things along the way
While implementing URLPattern, we discovered some inconsistencies between the specification and the web-platform tests, a cross-browser test suite maintained by all major browsers to test conformance to web standard specifications. For instance, we found that URLs with non-special protocols (opaque-paths) and URLs with invalid characters in hostnames were not correctly defined and processed within the URLPattern specification. We worked actively with the Chromium and the Safari teams to address these issues.
URLPatterns constructed from hostname components that contain newline or tab characters were expected to fail in the corresponding web-platform tests. This was due to an inconsistency between the original URLPattern implementation and the URLPattern specification.
const pattern = new URLPattern({ "hostname": "bad\nhostname" });
const matched = pattern.test({ "hostname": "badhostname" });
// This now returns true.
We opened several issues to document these inconsistencies and followed up with a pull-request to fix the specification, ensuring that all implementations will eventually converge on the same corrected behavior. This also resulted in fixing several inconsistencies in web-platform tests, particularly around handling certain types of white space (such as newline or tab characters) in hostnames.
Getting started with URLPattern
If you’re interested in using URLPattern today, you can:
- Use it natively in modern browsers by accessing the global URLPattern class
- Try it in Cloudflare Workers (which has had URLPattern support for some time, now with improved spec compliance and performance)
- Try it in Node.js, starting from v23.8.0
- Try it in NativeScript on iOS and Android, starting from v8.9.0
- Try it in Deno

Here is a more complex example showing how URLPattern can be used for routing in a Cloudflare Worker — a common use case when building API endpoints or web applications that need to handle different URL paths efficiently and differently. The following example shows a pattern for REST APIs that matches both “/users” and “/users/:userId”:
const routes = [
  new URLPattern({ pathname: '/users{/:userId}?' }),
];

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const url = new URL(request.url);
    for (const route of routes) {
      const match = route.exec(url);
      if (match) {
        const { userId } = match.pathname.groups;
        if (userId) {
          return new Response(`User ID: ${userId}`);
        }
        return new Response('List of users');
      }
    }
    // No matching route found
    return new Response('Not Found', { status: 404 });
  },
} satisfies ExportedHandler<Env>;
What does the future hold?
The contribution of URLPattern to Ada URL and Node.js is just the beginning. We’re excited about the possibilities this opens up for developers across different JavaScript environments.
In the future, we expect to contribute additional improvements to URLPattern’s performance, enabling more use cases for web application routing. Additionally, efforts to standardize the URLPatternList proposal will help deliver faster matching capabilities for server-side runtimes. We’re excited about these developments and encourage you to try URLPattern in your projects today.\t
Try it and let us know what you think by creating an issue on the workerd repository. Your feedback is invaluable as we work to further enhance URLPattern.
We hope to do our part to build a unified JavaScript ecosystem, and encourage others to do the same. This may mean looking for opportunities, such as we have with URLPattern, to share API implementations across backend runtimes. It could mean using or contributing to web-platform-tests if you are working on a server-side runtime or web-standard APIs, or it might mean joining WinterTC to help define web-interoperable standards for server-side JavaScript.
Introducing Cloudy, Cloudflare’s AI agent for simplifying complex configurations
It’s a big day here at Cloudflare! Not only is it Security Week, but today marks Cloudflare’s first step into a completely new area of functionality, intended to improve how our users both interact with, and get value from, all of our products.
We’re excited to share a first glance of how we’re embedding AI features into the management of Cloudflare products you know and love. Our first mission? Focus on security and streamline the rule and policy management experience. The goal is to automate away the time-consuming task of manually reviewing and contextualizing Custom Rules in Cloudflare WAF, and Gateway policies in Cloudflare One, so you can instantly understand what each policy does, what gaps they have, and what you need to do to fix them.
Meet Cloudy, Cloudflare’s first AI agent
Our initial step toward a fully AI-enabled product experience is the introduction of Cloudy, the first version of Cloudflare AI agents, assistant-like functionality designed to help users quickly understand and improve their Cloudflare configurations in multiple areas of the product suite. You’ll start to see Cloudy functionality seamlessly embedded into two Cloudflare products across the dashboard, which we’ll talk about below.
And while the name Cloudy may be fun and light-hearted, our goals are more serious: Bring Cloudy and AI-powered functionality to every corner of Cloudflare, and optimize how our users operate and manage their favorite Cloudflare products. Let’s start with two places where Cloudy is now live and available to all customers using the WAF and Gateway products.
WAF Custom Rules
Let’s begin with AI-powered overviews of WAF Custom Rules. For those unfamiliar, Cloudflare’s Web Application Firewall (WAF) helps protect web applications from attacks like SQL injection, cross-site scripting (XSS), and other vulnerabilities.
One specific feature of the WAF is the ability to create WAF Custom Rules. These allow users to tailor security policies to block, challenge, or allow traffic based on specific attributes or security criteria.
However, for customers with dozens or even hundreds of rules deployed across their organization, it can be challenging to maintain a clear understanding of their security posture. Rule configurations evolve over time, often managed by different team members, leading to potential inefficiencies and security gaps. What better problem for Cloudy to solve?
Powered by Workers AI, today we’ll share how Cloudy will help review your WAF Custom Rules and provide a summary of what’s configured across them. Cloudy will also help you identify and solve issues such as:
- Identifying redundant rules: Identify when multiple rules are performing the same function, or using similar fields, helping you streamline your configuration.
- Optimising execution order: Spot cases where rule ordering affects functionality, such as when a terminating rule (block/challenge action) prevents subsequent rules from executing.
- Analysing conflicting rules: Detect when rules counteract each other, such as one rule blocking traffic that another rule is designed to allow or log.
- Identifying disabled rules: Highlight potentially important security rules that are in a disabled state, helping ensure that critical protections are not accidentally left inactive.
Cloudy won’t just summarize your rules, either. It will analyze the relationships and interactions between rules to provide actionable recommendations. For security teams managing complex sets of Custom Rules, this means less time spent auditing configurations and more confidence in your security coverage.
Available to all users, we’re excited to show how Cloudflare AI Agents can enhance the usability of our products, starting with WAF Custom Rules. But this is just the beginning.
Cloudflare One Firewall policies
We’ve also added Cloudy to Cloudflare One, our SASE platform, where enterprises manage the security of their employees and tools from a single dashboard.
In Cloudflare Gateway, our Secure Web Gateway offering, customers can configure policies to manage how employees do their jobs on the Internet. These Gateway policies can block access to malicious sites, prevent data loss violations, and control user access, among other things.
But similar to WAF Custom Rules, Gateway policy configurations can become overcomplicated and bogged down over time, with old, forgotten policies that do who-knows-what. Multiple selectors and operators working in counterintuitive ways. Some blocking traffic, others allowing it. Policies that include several user groups, but carve out specific employees. We’ve even seen policies that block hundreds of URLs in a single step. All to say, managing years of Gateway policies can become overwhelming.
So, why not have Cloudy summarize Gateway policies in a way that makes their purpose clear and concise?
Available to all Cloudflare Gateway users (create a free Cloudflare One account here), Cloudy will now provide a quick summary of any Gateway policy you view. It’s now easier than ever to get a clear understanding of each policy at a glance, allowing admins to spot misconfigurations, redundant controls, or other areas for improvement, and move on with confidence.
Built on Workers AI
At the heart of our new functionality is Cloudflare Workers AI (yes, the same version that everyone uses!) that leverages advanced large language models (LLMs) to process vast amounts of information; in this case, policy and rules data. Traditionally, manually reviewing and contextualizing complex configurations is a daunting task for any security team. With Workers AI, we automate that process, turning raw configuration data into consistent, clear summaries and actionable recommendations.
How it works
Cloudflare Workers AI ingests policy and rule configurations from your Cloudflare setup and combines them with a purpose-built LLM prompt. We leverage the same publicly-available LLM models that we offer our customers, and then further enrich the prompt with some additional data to provide it with context. For this specific task of analyzing and summarizing policy and rule data, we provided the LLM:
- Policy & rule data: This is the primary data itself, including the current configuration of policies/rules for Cloudy to summarize and provide suggestions against.
- Documentation on product abilities: We provide the model with additional technical details on the policy/rule configurations that are possible with each product, so that the model knows what kind of recommendations are within its bounds.
- Enriched datasets: Where WAF Custom Rules or Cloudflare One Gateway policies leverage other 'lists' (e.g., a WAF rule referencing multiple countries, a Gateway policy leveraging a specific content category), the selected list items must first be translated from an ID to plain-text wording so that the LLM can interpret which policy/rule values are actually being used.
- Output instructions: We specify to the model which format we'd like to receive the output in. In this case, we use JSON for easiest handling.
- Additional clarifications: Lastly, we explicitly instruct the LLM to be sure about its output, valuing that aspect above all else. Doing this helps us ensure that no hallucinations make it to the final output.
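To make this concrete, here is a minimal sketch of how configuration data can be combined with a purpose-built prompt on Workers AI and returned as JSON. This is not Cloudy's actual implementation; the binding name (AI), model, prompt, and output shape are all illustrative assumptions.
export default {
  async fetch(request, env) {
    // The configuration to summarize, e.g. a JSON array of WAF Custom Rules (illustrative).
    const rules = await request.json();

    const messages = [
      {
        role: "system",
        content:
          "You are a security assistant. Summarize the following WAF Custom Rules. " +
          "Only state facts you are certain of. Respond with JSON shaped as " +
          '{"summary": string, "recommendations": string[]}.',
      },
      { role: "user", content: JSON.stringify(rules) },
    ];

    // Model name is illustrative; any Workers AI text-generation model works similarly.
    const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", { messages });

    // The prompt asks for JSON, so parse the text response before returning it.
    let parsed;
    try {
      parsed = JSON.parse(result.response);
    } catch {
      parsed = { summary: result.response, recommendations: [] };
    }
    return Response.json(parsed);
  },
};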
By automating the analysis of your WAF Custom Rules and Gateway policies, Cloudflare Workers AI not only saves you time but also enhances security by reducing the risk of human error. You get clear, actionable insights that allow you to streamline your configurations, quickly spot anomalies, and maintain a strong security posture—all without the need for labor-intensive manual reviews.
What’s next for Cloudy
Beta previews of Cloudy are live for all Cloudflare customers today. But this is just the beginning of what we envision for AI-powered functionality across our entire product suite.
Throughout the rest of 2025, we plan to roll out additional AI agent capabilities across other areas of Cloudflare. These new features won’t just help customers manage security more efficiently, but they’ll also provide intelligent recommendations for optimizing performance, streamlining operations, and enhancing overall user experience.
We’re excited to hear your thoughts as you get to meet Cloudy and try out these new AI features – send feedback to us at cloudyfeedback@cloudflare.com, or post your thoughts on X, LinkedIn, or Mastodon tagged with #SecurityWeek! Your feedback will help shape our roadmap for AI enhancement, and bring our users smarter, more efficient tooling that helps everyone get more secure.
Watch on Cloudflare TV
2025-04-03
6 min read
When building a full-stack application, many developers spend a surprising amount of time trying to make sure that the various services they use can communicate and interact with each other. Media-rich applications require image and video pipelines that can integrate seamlessly with the rest of your technology stack.
With this in mind, we’re excited to introduce the Images binding, a way to connect the Images API directly to your Worker and enable new, programmatic workflows. The binding removes unnecessary friction from application development by allowing you to transform, overlay, and encode images within the Cloudflare Developer Platform ecosystem.
In this post, we’ll explain how the Images binding works, as well as the decisions behind local development support. We’ll also walk through an example app that watermarks and encodes a user-uploaded image, then uploads the output directly to an R2 bucket.
The challenges of fetch()
Cloudflare Images was designed to help developers build scalable, cost-effective, and reliable image pipelines. You can deliver multiple copies of an image — each resized, manipulated, and encoded based on your needs. Only the original image needs to be stored; different versions are generated dynamically, or as requested by a user’s browser, then subsequently served from cache.
With Images, you have the flexibility to transform images that are stored outside the Images product. Previously, the Images API was based on the fetch() method, which posed three challenges for developers:
First, when transforming a remote image, the original image must be retrieved from a URL. This isn’t applicable for every scenario, like resizing and compressing images as users upload them from their local machine to your app. We wanted to extend the Images API to broader use cases where images might not be accessible from a URL.
Second, the optimization operation — the changes you want to make to an image, like resizing it — is coupled with the delivery operation. If you wanted to crop an image, watermark it, and then resize the watermarked image, you'd need to serve one transformation to the browser, retrieve the output URL, and transform it again. This adds overhead to your code, and can be tedious and inefficient to maintain. Decoupling these operations means that you no longer need to manage multiple requests for consecutive transformations.
Third, optimization parameters — the way that you specify how an image should be manipulated — follow a fixed order. For example, cropping is performed before resizing. It’s difficult to build a flow that doesn’t align with the established hierarchy — like resizing first, then cropping — without a lot of time, trial, and effort.
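For context, here is a minimal sketch of that fetch()-based approach, assuming image transformations are enabled for the zone; the image URL and parameters are purely illustrative. The transformation options ride along in the cf.image property of the request:
export default {
  async fetch(request) {
    // The original must be reachable at a URL; cf.image describes the transformation.
    return fetch("https://example.com/images/original.jpg", {
      cf: {
        image: {
          width: 800,
          format: "avif",
        },
      },
    });
  },
};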
But complex workflows shouldn’t require complex logic. In February, we released the Images binding in Workers to make the development experience more accessible, intuitive, and user-friendly. The binding helps you work more productively by simplifying how you connect the Images API to your Worker and providing more fine-grained control over how images are optimized.
Extending the Images workflow
Because optimization parameters follow a fixed order, we previously had to serve an output image and then transform that output again just to resize it after watermarking. The binding eliminates this step.
Bindings connect your Workers to external resources on the Developer Platform, allowing you to manage interactions between services in a few lines of code. When you bind the Images API to your Worker, you can create more flexible, programmatic workflows to transform, resize, and encode your images — without requiring them to be accessible from a URL.
Within a Worker, the Images binding supports the following functions (a short usage sketch follows the list):
- .transform(): Accepts optimization parameters that specify how an image should be manipulated
- .draw(): Overlays an image over the original image. The overlaid image can be optimized through a child .transform() function.
- .output(): Defines the output format for the transformed image.
- .info(): Outputs information about the original image, like its format, file size, and dimensions.
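As a quick illustration of the first three functions, here is a minimal sketch of a Worker that pulls an original from R2, resizes it, and returns it as WebP. The R2 object key is an assumption, and the binding names match the wrangler configuration shown later in this post:
export default {
  async fetch(request, env) {
    // Read the original image from R2 (the key is illustrative).
    const original = await env.R2.get("originals/photo.jpg");
    if (!original) {
      return new Response("Not found", { status: 404 });
    }

    // Resize with .transform() and encode with .output().
    const resized = await env.IMAGES.input(original.body)
      .transform({ width: 640 })
      .output({ format: "image/webp" });

    return resized.response();
  },
};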
The life of a binding request
At a high level, a binding works by establishing a communication channel between a Worker and the binding’s backend services.
To do this, the Workers runtime needs to know exactly which objects to construct when the Worker is instantiated. Our control plane layer translates between a given Worker's code and each binding's backend services. When a developer runs wrangler deploy, any invoked bindings are converted into a dependency graph. This describes the objects and their dependencies that will be injected into the env of the Worker when it runs. Then, the runtime loads the graph, builds the objects, and runs the Worker.
In most cases, the binding makes a remote procedure call to the backend services of the binding. The mechanism that makes this call must be constructed and injected into the binding object; for Images, this is implemented as a JavaScript wrapper object that makes HTTP calls to the Images API.
These calls contain the sequence of operations that are required to build the final image, represented as a tree structure. Each .transform() function adds a new node to the tree, describing the operations that should be performed on the image. The .draw() function adds a subtree, where child .transform() functions create additional nodes that represent the operations required to build the overlay image. When .output() is called, the tree is flattened into a list of operations; this list, along with the input image itself, is sent to the backend of the Images binding.
For example, let’s say we had the following commands:
env.IMAGES.input(image)
  .transform({ rotate: 90 })
  .draw(
    env.IMAGES.input(watermark)
      .transform({ width: 32 })
  )
  .transform({ blur: 5 })
  .output({ format: "image/png" })
Put together, the request would look something like this:
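As a rough mental model only, and not Cloudflare's actual wire format, the chain above flattens into an ordered list of operations along these lines; the field names are invented for illustration:
// Purely illustrative: the chained calls above, flattened into an ordered
// list of operations as described in the text. The real request is a
// multipart form whose exact layout is not shown here.
const operations = [
  { op: "transform", params: { rotate: 90 } },
  { op: "draw", overlay: [{ op: "transform", params: { width: 32 } }] },
  { op: "transform", params: { blur: 5 } },
  { op: "output", params: { format: "image/png" } },
];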
To communicate with the backend, we chose to send multipart forms. Each binding request is inherently expensive, as it can involve decoding, transforming, and encoding. Binary formats may offer slightly lower overhead per request, but given the bulk of the work in each request is the image processing itself, any gains would be nominal. Instead, we stuck with a well-supported, safe approach that our team had successfully implemented in the past.
Meeting developers where they are
Beyond the core capabilities of the binding, we knew that we needed to consider the entire developer lifecycle. The ability to test, debug, and iterate is a crucial part of the development process.
Developers won’t use what they can’t test; they need to be able to validate exactly how image optimization will affect the user experience and performance of their application. That’s why we made the Images binding available in local development without incurring any usage charges.
As we scoped out this feature, we reached a crossroads with how we wanted the binding to work when developing locally. At first, we considered making requests to our production backend services for both unit and end-to-end testing. This would require open-sourcing the components of the binding and building them for all Wrangler-supported platforms and Node versions.
Instead, we focused our efforts on targeting individual use cases by providing two different methods. In Wrangler, Cloudflare's command-line tool, developers can choose between an online and offline mode of the Images binding. The online mode makes requests to the real Images API; this requires Internet access and authentication to the Cloudflare API. Meanwhile, the offline mode requests a lower-fidelity fake, which is a mock API implementation that supports a limited subset of features. This is primarily used for unit tests, as it doesn't require Internet access or authentication. By default, wrangler dev uses the online mode, mirroring the same version that Cloudflare runs in production.
See the binding in action
Let’s look at an example app that transforms a user-uploaded image, then uploads it directly to an R2 bucket.
To start, we created a Worker application and configured our wrangler.toml file to add the Images, R2, and assets bindings:
[images]
binding = "IMAGES"
[[r2_buckets]]
binding = "R2"
bucket_name = ""
[assets]
directory = "./"
binding = "ASSETS"
In our Worker project, the assets directory contains the image that we want to use as our watermark.
Our frontend has a form with a file input element that accepts image uploads:
const html = `<h1>Upload Image</h1>
<form method="POST" enctype="multipart/form-data">
  <!-- Field name "image" matches formData.get("image") in the Worker below -->
  <input type="file" name="image" accept="image/*" required />
  <button type="submit">Upload</button>
</form>`;
export default {
async fetch(request, env) {
if (request.method === "GET") {
return new Response(html, {headers:{'Content-Type':'text/html'},})
}
if (request.method ==="POST") {
// This is called when the user submits the form
}
}
};
Next, we set up our Worker to handle the optimization.
The user will upload images directly through the browser; since there isn't an existing image URL, we won't be able to use fetch() to get the uploaded image. Instead, we can transform the uploaded image directly, operating on its body as a stream of bytes.
Once we've read the uploaded bytes, we can manipulate the image. Here, we apply our watermark and encode the result to AVIF before uploading the transformed image to our R2 bucket:
// Helper that builds a same-origin URL pointing at a static asset (used below for the watermark).
function assetUrl(request, path) {
  const url = new URL(request.url);
  url.pathname = path;
  return url;
}
export default {
async fetch(request, env) {
if (request.method === "GET") {
return new Response(html, {headers:{'Content-Type':'text/html'},})
}
if (request.method === "POST") {
try {
// Parse form data
const formData = await request.formData();
const file = formData.get("image");
if (!file || typeof file.arrayBuffer !== "function") {
return new Response("No image file provided", { status: 400 });
}
// Read uploaded image as array buffer
const fileBuffer = await file.arrayBuffer();
// Fetch image as watermark
let watermarkStream = (await env.ASSETS.fetch(assetUrl(request, "watermark.png"))).body;
// Apply watermark and convert to AVIF
const imageResponse = (
await env.IMAGES.input(fileBuffer)
// Draw the watermark on top of the image
.draw(
env.IMAGES.input(watermarkStream)
.transform({ width: 100, height: 100 }),
{ bottom: 10, right: 10, opacity: 0.75 }
)
// Output the final image as AVIF
.output({ format: "image/avif" })
).response();
// Add timestamp to file name
const fileName = `image-${Date.now()}.avif`;
// Upload to R2
await env.R2.put(fileName, imageResponse.body);
return new Response(`Image uploaded successfully as ${fileName}`, { status: 200 });
} catch (err) {
console.log(err.message);
// Surface the failure to the client instead of falling through with no Response
return new Response(`Image processing failed: ${err.message}`, { status: 500 });
}
}
// Fallback for any other HTTP method
return new Response("Method not allowed", { status: 405 });
}
};
We’ve also created a gallery in our documentation to demonstrate ways that you can use the Images binding. For example, you can transcode images from Workers AI or draw a watermark from KV on an image that is stored in R2.
Looking ahead, the Images binding unlocks many exciting possibilities to seamlessly transform and manipulate images directly in Workers. We aim to create an even deeper connection between all the primitives that developers use to build AI and full-stack applications.
Have some feedback for this release? Let us know in the Community forum.