I recently shared an overview article about Stubbing External Services in Rails. I found it when looking for the best way to stub a plethora of services in a microservices environment. Sure, you can Docker (or whatever) everything and run it locally / in your test suite. But unless you have plenty of disk and memory space, that isn't always a viable option. The alternative: simulate the service. Mock or stub the endpoint.
The go-to gems are Webmock, which catches requests and lets you define the responses explicitly, and VCR, which lets you record responses and play them back.
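A minimal WebMock stub, for illustration (the endpoint and payload are made up; in a spec this would typically live in a before block):

```ruby
require 'webmock/rspec'

# Any GET to this (hypothetical) endpoint now returns canned JSON
# instead of hitting the network.
stub_request(:get, "https://users.example.com/api/v1/users/42")
  .to_return(
    status: 200,
    body: '{"id":42,"name":"Jane"}',
    headers: { 'Content-Type' => 'application/json' }
  )
```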
VCR is quite cool, but as it is a recorder of earlier responses, there may be a lot of noise to dig through when trying to make the responses a bit more generic (dealing with randomized tokens and the like).
For testing I pers…
Prometheus is a statistics collection tool that originated at SoundCloud. Designed for high performance environments, it is built to be blazingly fast. Hence, a client typically is expected to be blazingly fast as well, gathering and presenting data within nanoseconds. For Ruby on Rails applications, however, this has led to an unresolved issue with the Prometheus ruby-client when the application is forked (typical for Puma, Passenger and other popular Ruby servers). The Prometheus client collects data within its own fork before serving it to the exporter endpoint. This may or may not be a problem. When you're measuring response times, running averages from a random fork may be good enough. However, when you're also counting data over time you're having separate counters in …
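To illustrate where the counters live, a minimal counter with the prometheus-client gem looks like this (a sketch; the metric name is made up, and older versions of the gem take the docstring as a positional argument):

```ruby
require 'prometheus/client'

# The default registry lives in this process's memory; under Puma or
# Passenger every forked worker therefore keeps its own copy of it.
prometheus = Prometheus::Client.registry

# Hypothetical metric: counts requests seen by *this* worker only.
http_requests = prometheus.counter(:http_requests_total, 'A counter of HTTP requests')

http_requests.increment
```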
Still (2013) a great talk by Sandi Metz on testing, and how to do it right, without getting too theoretical. While this talk is on Ruby, and it uses a Rails testing framework, it really is applicable to any other language (only the syntax will probably be a wee bit shittier ;))
Watch Rails Conf 2013 The Magic Tricks of Testing by Sandi Metz on YouTube
(and while unit-testing is between brackets, in general, being a minimalist when writing code really is a good idea)
Being unable to find a really concise description at a stable endpoint, here it is: the "scripts"-section of package.json, central to node/npm/yarn/euh… JavaScript development these days.
The package.json is a JSON file which typically contains things like the name and version of a project, the typical metadata. It also contains the dependencies of a project. But this post is about the scripts-section. Completely optional, but so convenient that your project typically has one already:
{
  "name": "hello world",
  "version": "0.0.1",
  "main": "index.js",
  "scripts": {
  }
}
The "scripts"
-section contains snippets of code that you typically run using npm
or yarn
:
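For example, a script registered under the name "test" (a hypothetical entry here) can be started with either tool:

```
npm run test
# or, equivalently:
yarn run test
```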
So let's say you're typing git add . && git commit -m "wip" && git push a lot (totally not recommended); you could do:
{
  "name": "hello world",
  "version": "0.0.1",
  "main": "index.js",
  "scripts": {
    "wipitup": "git add . && ...
I haven't had much need for isolating services, something that Docker is really good at. Most dependencies of my Rails apps are covered by the Gemfile anyway (and package.json for JavaScript) and reasonably isolated with rbenv. I don't experience version related issues very often, as I try to stay reasonably up to date and not rely too much on the very state of the art. For temporary projects, however, Docker is a great solution, even for me :o I don't want an entire JBoss suite running on my machine, so when I had to develop a Keycloak template I knew that Docker would be the right tool. While I'm going to discuss the specifics of getting started with a Keycloak image, the workflow described applies to any other image just as well.
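As a sketch of that workflow: pulling and running a disposable Keycloak container could look like this (the image name and environment variables below match the jboss/keycloak image of that era; treat them as assumptions for your setup):

```
# Start a throwaway Keycloak instance on http://localhost:8080;
# KEYCLOAK_USER/KEYCLOAK_PASSWORD create the initial admin account.
docker run --rm -p 8080:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak
```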
So what about Keycloak? Keycloak is a role-based authorization and authentication tool …
I've been able to stay away from PHP-based projects for quite some time. Until recently. I needed a small API. The idea was that the API would be transferred to a relatively old server that had been running stably for years, and the client didn't want to risk installing additional script interpreters on it. It might even have been my own suggestion: it would be a really small API, requiring no special changes on an already operational 24x7 managed server. On top of it I'd write a modern style front-end, running entirely in the browser.
Of course the API that was intended to be simple got a bit more complex. I wanted the API to output clean JSON messages, which required some data mangling, as the data was stored in CSVs, TSVs, and misused XML files, or only accessible through crappy SOAPy APIs. So what does present-day PHP look like (well, the Red Hat PHP version I was able to use is still in the 5.x series)? TL;DR: sometimes it was definitely ugly, but at least I can happily live with the cod…
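For a taste, emitting a clean JSON response in PHP 5.x can be as small as this (a sketch; the data is made up, not the actual API code):

```php
<?php
// Hypothetical rows, e.g. parsed from one of the CSV sources.
$rows = array(
  array('id' => 1, 'name' => 'example'),
  array('id' => 2, 'name' => 'another'),
);

// json_encode has been available since PHP 5.2.
header('Content-Type: application/json');
echo json_encode(array('data' => $rows));
```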
Earlier I already wrote a bit about technical debt. Not having automated tests is often considered a form of technical debt.
You test to be sure that something works, and that it works well. You write automated tests (or have them written) because manually verifying that everything works well costs a lot of time. After all, when your application still changes often, you also want to be sure that it keeps working. Automated tests are small programs that test whether components work correctly on their own (unit tests) or in combination (integration tests).
With unit tests, small components are checked individually to see whether they still work. For example, one test can keep verifying that the amounts in a quote still add up neatly, and another that the expected VAT amount still comes out.
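To make that concrete, such a unit test could look like this (a minimal sketch in Ruby; the Quote class and its interface are hypothetical):

```ruby
require "minitest/autorun"

# Hypothetical class under test: a quote with line item amounts and 21% VAT.
class Quote
  VAT_RATE = 0.21

  def initialize(amounts)
    @amounts = amounts
  end

  def subtotal
    @amounts.inject(0, :+)
  end

  def vat
    (subtotal * VAT_RATE).round(2)
  end
end

class QuoteTest < Minitest::Test
  def test_amounts_still_add_up
    assert_equal 150.0, Quote.new([100.0, 50.0]).subtotal
  end

  def test_expected_vat_amount
    assert_in_delta 31.5, Quote.new([100.0, 50.0]).vat
  end
end
```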
With integration tests, or syst…
No worries, I do value testing. But test driven? It depends.
The last few weeks I've been working off and on building a crawler. The thing triggers a series of scheduled tasks that can run in parallel, possibly generating new tasks (which are subsequently scheduled again in their own task-specific queue) and so on. The end goal is structured copies of external resources (read: webpages). But I've been stuck close to the start for quite a while, setting up the base architecture using the test driven development approach. And I'm failing, it's going too slow :(
Like many developers, this is not my first parser/crawler. But this time I wanted to make a GoodParser™: more extensible, more flexible, and foremost robust, building in fault tolerance from the start. But I'm not getting near the end result and it is frustrating.
The TDD school says make small victories, and oh yes, I've had my fair share of victories already. But the end is nowher…
My guiding principle in web-development is (still): Always make things work without (client side) JavaScript first.
Aside from offering graceful degradation of the experience, progressive enhancement leads to better code. Three reasons why:
Yes, I do shiver when I hear things like CSS-in-JS. KISS!
Photo by [Dmitry Baranovskiy](https://www.flickr.com/photos/dmitry-baranovsk…
In a previous post I described how simple integrating elasticsearch with Rails is for beginners. You could be happy with the fact that you now have full text search implemented, but that overly basic setup probably doesn't work much better than adding a column to your model, throwing all the text in it and running a LIKE query (although elasticsearch does try to rearrange the results a bit).
In this post I will teach you two things that make elasticsearch worth it.
Analyzers add some fuzziness to your searches. First, make sure your analyzer is set to the right language; this will improve your results. Add the following bit to your model (I typically place it just below where the scopes and validations are defined).
settings index: { number_of_shards: 1 } do
...
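For reference, a filled-in version of that block could look like this (a sketch using the elasticsearch-model DSL; the field names and language are assumptions):

```ruby
# Just below the scopes and validations in the model:
settings index: { number_of_shards: 1 } do
  mappings dynamic: 'false' do
    # Language-specific analyzers bring stemming and stop words,
    # so "running" also matches "run".
    indexes :title, analyzer: 'english'
    indexes :body,  analyzer: 'english'
  end
end
```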
I don't like complexity. Adding new items to your stack increases complexity. But sometimes it is worth it. When you need proper search and filtering, elasticsearch is worth it. Mostly because it isn't hard to set up at all, as you'll learn in this post.
Installing it on a Debian server is easy: simply follow their instructions (you'll add their package server, and run apt-get install). On OS X you can install it easily with Homebrew (brew install elasticsearch), but do make sure you have a JDK installed (e.g. openjdk-8-jre-headless).
If you're not using something like Docker, you probably have to repeat the steps on your dev machine, your staging server and your production environment.
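Once the server runs, wiring it into a Rails model takes little more than the official gems (a minimal sketch; Article is a hypothetical model):

```ruby
# Gemfile
gem 'elasticsearch-model'
gem 'elasticsearch-rails'

# app/models/article.rb
class Article < ActiveRecord::Base
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks # keep the index in sync on save/destroy
end

# One-off: index existing records, then search:
Article.import
Article.search('elasticsearch').records.to_a
```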
Note: when running on a low-memory server (which isn't recommended for production), you should make sure that the configured heap size isn't too high; edit `/et…
Hyper agile development, yeah! No, I'm not going to talk about development for embedded systems (systems ranging from ATMs to microwaves). But about developing right in the middle of a working organisation. Embedded the way journalists are embedded in the army. Not Introduction, Design, Build, Test, and finally the Big Delivery (where every step is a translation without a source to fall back on). And not even the iterative development that is common in Scrum. No: being confronted with requests every day: "Now I'd like to be able to do this." "Can you do this for me." And then making decisions. That is embedded development to me.
Hyper agile. A bit (very) hectic, but all the more fun for it. My code is a mess from time to time, and I try to clean it up in the idle minutes. And it has to keep running, because at any moment a new requ…
Although in mainstream media hacking is considered something bad, something criminals do, hacking really has nothing to do with bad things. Hacking in software is about building a bazaar instead of building a cathedral (Raymond, 1999). Hacking is central to the idea of agile programming and the free software movement. While building a cathedral is about planning things properly, the bazaar way is the shaky way of hacking things together. Being able to build something cheaply, quickly. The it-just-works approach. Others counter that this hack-it-together approach makes things unstable. In the long run, they argue, hacking will be more expensive. But think about it… how often have you worked on something great, something really complex, something you've tried to release perfectly, and a) how often have you succeeded in actually releasing something and b) how much better has it become through this careful process of discussing, planning and crafting - doing it the right …
This article from murblog by Maarten Brouwers (murb) is licensed under a Creative Commons Attribution 3.0 Netherlands licence.