I'm nitpicking a bit on code naming...

An article, posted 8 days ago filed in ruby, development, software, naming & Martin Fowler.

Today's review had a final note: I'm nitpicking a bit on code naming. The approaches you take are good, but naming is hard:

> There are only two hard things in Computer Science: cache invalidation and naming things.

(see here for a list of variations on these 'two hard things' jokes in Computer Science)

I enjoy the ruby programming language because you can get a long way just by assuming things. They say you should never assume anything, because a lot of things are chaos, but I prefer to work from assumptions, and when something doesn't match my assumption I choose between:

  1. fixing the assumption (perhaps by making the method name clearer)
  2. making it so that what I assumed to be working actually works (so I never ever have to think about or be bothered by it again)

… and/or maybe I am just wrong and I should level up my understanding of things. We all have our internal models of how things work, some…
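To make those two options concrete, here is a tiny hypothetical Ruby sketch (the names are made up, not taken from the reviewed code):

# Hypothetical example: a reader of `active_users` would assume it only returns
# active users, but it also returned invited ones.
User = Struct.new(:name, :state)

USERS = [
  User.new("Anne", "active"),
  User.new("Bob", "invited"),
]

# Option 1: fix the assumption by making the name honest.
def active_and_invited_users
  USERS.select { |user| %w[active invited].include?(user.state) }
end

# Option 2: make what was assumed to be the behaviour the actual behaviour.
def active_users
  USERS.select { |user| user.state == "active" }
end

p active_and_invited_users.map(&:name) # => ["Anne", "Bob"]
p active_users.map(&:name)             # => ["Anne"]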

Continue reading...

Should I be upgrading all my dependencies on a regular basis?

An article, posted 6 months ago filed in engineering, software, google, security, gems, programming & development.

For projects I maintain, I try to keep dependencies up to date on a regular basis. Not everyone works like that; some live by the adage "if it ain't broke, don't fix it", but that is not an approach I subscribe to in software development.

A common reason to update software dependencies is to fix security issues or bugs that plague the project at hand. My main argument in favour of more frequent updates is that when you suddenly need to make an update (because of an imminent security threat) it won't be hard; when dependencies haven't been updated in a long time, making that update can be hard.
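As a hedged sketch of what that routine could look like in a Ruby project (the gems and versions below are just examples): pessimistic version constraints in the Gemfile keep the regular bundle update runs small and predictable, so the occasional urgent update stays boring.

# Gemfile — hypothetical example of pessimistic version constraints
source "https://rubygems.org"

gem "rails", "~> 7.1.0" # accepts 7.1.x patch releases, not 7.2 or 8.0
gem "puma",  "~> 6.4"   # accepts 6.x releases from 6.4 up, not 7.0

Running bundle outdated regularly then shows which of these constraints are holding back newer releases.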

There are risks involved in updating dependencies: a new version might introduce breaking changes, and things that you rely on suddenly don't work or don't exist anymore. It might even introduce new bugs that aren't apparent on the first run. And when your test suite is not up to par, verifying that everything works as expected is time consuming. But that can all be address…

Continue reading...

(Web/View) Components: Should everything be a component?

An article, posted about one year ago filed in components, design system, structure, todo, programming, development, front-end & html.

Recently I was reviewing a merge request with some front-end code in which a simple div, which changed a bit of the custom appearance of a block of text through a few custom classes, was changed into a call to a view component that then applied those same classes, passed on to the component through a more deeply nested hash.

> Instead of [‘div’-itis](https://en.wiktionary.org/wiki/divitis) you get ‘TextComponent’-itis… we don’t add better semantic or structural information to the page layout

Instead?

Keep the code as is. And perhaps create a ticket (or annotate it with a TODO:) noting that you may want to extract this ‘custom-class’ call into a truly reusable component later. While I don’t think there is anything inherently bad about offering the option to override or add some custom classes to a component, a component should only be used if it adds structural meaning, either from the developer's side of things or (better) from the consumer’s side of things (e.g. semantic output that can be p…
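A minimal sketch of that suggestion (hypothetical classes and template, not the code from the reviewed merge request):

<%# Keep the plain div for now; the classes are purely presentational. %>
<%# TODO: extract into a reusable, semantically named component once it earns structural meaning. %>
<div class="text-lg text-muted">
  <%= @introduction %>
</div>

…rather than wrapping it prematurely in something like <%= render TextComponent.new(options: { classes: %w[text-lg text-muted] }) %>, which adds indirection without adding meaning.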

Continue reading...

A local .test domain for development with https using Puma-dev (on macOS or Linux)

An article, posted about 3 years ago filed in development, server, rails & local.

When you maintain a few projects locally, developing against localhost works well enough. npm start or rails s or python manage.py runserver or php -S 127.0.0.1:8000 will boot up a server that binds to a local port and allows you to see your work locally. The advantage of using localhost is that you don't have to bother with https, as browsers don't require https on localhost for their latest features. But sometimes you need different domains to test with, and running multiple services distinguished by nothing more than their port numbers can become hard to manage.

To address this problem, and not only for websites served by the puma server, puma-dev exists. It is a spiritual successor to Sam Stephenson's Pow, which solved this problem for rack-apps. puma-dev, however, can proxy other servers as well, whether these are written in JavaScript, PHP, ruby or other languages; as long as they expose a port on 127.0.0.1, your local loopback host, you can use…
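To sketch the convention involved (hedged; in a shell this is just an echo into a file): puma-dev serves https://name.test for every entry in the ~/.puma-dev directory, and a plain file containing a port number tells it to proxy to whatever already listens on that port locally.

# Hypothetical example (plain Ruby, equivalent to `echo 8000 > ~/.puma-dev/legacyphp`):
# register an app that already listens on 127.0.0.1:8000 so puma-dev serves it
# as https://legacyphp.test.
require "fileutils"

puma_dev_dir = File.expand_path("~/.puma-dev")
FileUtils.mkdir_p(puma_dev_dir)
File.write(File.join(puma_dev_dir, "legacyphp"), "8000")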

Continue reading...

A pragmatic success story

An article, posted about 3 years ago filed in pragmatic, pragmatisch, murb, werk, html, css, development, php, react, projectmanagement, keycloak & knmi.

Sorry, a bit of chest-thumping here, but I am a little proud of this. A few years ago a former colleague of mine let me know he had heard about a project for which they were really looking for someone with my profile (my name had even been mentioned). An interaction design background, able to independently build a UI based on modern standards. The assignment: give an old extranet application, with a history going back to the early 2000s, a fresh face (and make it responsive). The assignment was published, I responded, and in the end it was decided that I could carry it out.

It turned out to be a collection of old-style PHP and CGI scripts, with a frameset-based layout of the kind you often saw around 2000. I know many colleagues who would rather run away from such an assignment, so much old code, so much history, but I had the feeling I could do it. A tendering process for a complete rebuild had apparently failed years earlier because the …

Continue reading...

Using your ruby-webmock configuration for your local test service

An article, posted more than 5 years ago filed in docker, development, rails, ruby, VCR, testing, resources, laptop & offline.

I recently shared an overview article about Stubbing External Services in Rails. I found it when looking for the best way to stub a plethora of services in a microservices environment. Sure, you can docker (or whatever) everything and run it locally / in your test suite. But unless you have plenty of disk and memory space, this isn't always a viable option. The alternative: simulate the service. Mock or stub the endpoint.

VCR and WebMock

The go-to gems are WebMock, which catches requests and allows you to define the responses explicitly, and VCR, which allows you to record responses and play them back.

VCR is quite cool, but as it is a recorder of earlier responses, there may be a lot of noise to dig through when trying to make the responses a bit more generic (dealing with random token requests and whatnot).
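By contrast, a minimal WebMock stub (with a hypothetical endpoint and payload) looks something like this:

require "webmock"
include WebMock::API
WebMock.enable!

# Hypothetical endpoint: any GET to this users listing returns a fixed JSON body.
stub_request(:get, "https://users.example.test/api/users").
  to_return(
    status: 200,
    body: '[{"id": 1, "name": "Anne"}]',
    headers: { "Content-Type" => "application/json" }
  )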

For testing I pers…

Continue reading...

The first four things on a (macOS) development machine for beginners

An article, posted more than 5 years ago filed in development, help, macos, system, configuration, php, python, ruby, vscode, sublimetext, editor, docker & homebrew.
  1. First update to the latest version of the OS, Mojave. You can download it for free from the App Store; see the upgrade instructions for Mojave.
  2. Install Homebrew … You can find the macOS Terminal (think of it as the Mac's Command Prompt) by pressing Cmd+Space and then searching for "Terminal" (usually it shows up after the first few letters). Then enter (copy & paste) the line that the Homebrew website lists. Sometimes you have to install extra things; the script will guide you through it. By the way, that shortcut, Cmd+Space, opens what is called Spotlight; I find it the easiest way to launch applications.
  3. Install Docker for Mac (unfortunately you nowadays need a DockerHub account for this). This downloads a disk image; drag the application to the Applications folder (as the image probably already indicates in the backgr…

Continue reading...

Prometheus for slow stats

An article, posted more than 5 years ago filed in development, engineering, cluster, management, devops, rails, ruby on rails, ruby, logging & monitoring.

Prometheus is a statistics collecting tool that originated at SoundCloud. Designed to be used in high performance environments, it is built to be blazingly fast. Hence, the client typically is expected to be blazingly fast as well, gathering and presenting data within nanoseconds. For Ruby on Rails applications, however, this has led to an unresolved issue with the Prometheus ruby-client when the same application is forked (typical for Puma, Passenger and other popular ruby-servers). The Prometheus client collects data within its own fork before serving it to the exporter endpoint. This may or may not be a problem. When you're measuring response times, running averages from a random fork may be good enough. However, when you're also counting data over time you're having separate counters in …
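A minimal illustration of the counting problem (hypothetical metric name, using the current prometheus-client API with its default in-memory store):

require "prometheus/client"

# With an in-memory store, every forked worker keeps its own copy of this
# counter, so the /metrics endpoint only reports the requests handled by the
# worker that happens to serve that particular scrape.
counter = Prometheus::Client::Counter.new(
  :http_requests_total,
  docstring: "Requests counted by this worker process only"
)
Prometheus::Client.registry.register(counter)

counter.increment # called once per request, in whichever fork handled it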

Continue reading...

Be a (unit-testing) minimalist

An article, posted about 6 years ago filed in testing, rails, ruby, programming & development.

Still (from 2013) a great talk by Sandi Metz on testing, and how to do it right, without getting too theoretical. While this talk is about ruby and uses a Rails framework for testing, it really is applicable to any other language (only the syntax will probably be a wee bit shittier ;))

Watch Rails Conf 2013 The Magic Tricks of Testing by Sandi Metz on YouTube

(and while unit-testing is in parentheses in the title: in general, being a minimalist when writing code really is a good idea)

Continue reading...

Really concise guide to package.json scripts

An article, posted about 6 years ago filed in yarn, npm, package, javascript, development & scripts.

Being unable to find a really concise description at a stable URL of the "scripts"-section of package.json (central to node/npm/yarn/… euh, JavaScript development these days), here it is.

The package.json is a JSON file which typically contains things like the name and version of a project, the typical metadata. It also contains the dependencies of a project. But this post is about the scripts-section: completely optional, but so convenient that your project typically has one already:

{
  "name": "hello-world",
  "version": "0.0.1",
  "main": "index.js",
  "scripts": {
  }
}

The "scripts"-section contains snippets of code that you typically run using npm or yarn:

So let's say you're typing git add . && git commit -m "wip" && git push a lot (totally not recommended); you could do:

{
  "name": "hello-world",
  "version": "0.0.1",
  "main": "index.js",
  "scripts": {
    "wipitup": "git add . && git commit -m \"wip\" && git push"
  }
}

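You can then run it with npm run wipitup (or yarn wipitup).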
Continue reading...

Developing Keycloak templates with Docker

An article, posted almost 7 years ago filed in docker, keycloak, template, javascript, development, css & html.

I haven't had much need for isolating services, something that Docker is really good at. Most dependencies of my Rails apps are covered by the Gemfile anyway (and package.json for JavaScript) and reasonably isolated with rbenv. I don't experience version-related issues very often, as I try to stay reasonably up to date and not rely too much on the very state of the art. For temporary projects, however, Docker is a great solution, even for me :o I don’t want an entire JBoss suite running on my machine, so when I had to develop a Keycloak template I knew that Docker would be the right tool. While I’m going to discuss the specifics of getting started with a Keycloak image, the workflow described applies to any other image as well.

So what about Keycloak? Keycloak is a role based authorization and authentication tool …

Continue reading...

PHP revisited

An article, posted almost 7 years ago filed in ruby, php, bad, comparison, review, language, programming, experience, api & development.

I've been able to stay away from PHP-based projects for quite some time. Until recently. I needed a small API. The idea was that the API would be transferred to a relatively old server that had been running stably for years, and the client didn't want to risk installing additional script interpreters on it. It might even have been my own suggestion: it would be a really small API, requiring no special changes on an already operational 24x7 managed server. On top of it I'd write a modern-style front-end, running entirely in the browser.

Of course the API that was intended to be simple got a bit more complex. I wanted the API to output clean JSON messages, which required some data mangling, as the data was stored in CSVs, TSVs and misused XML files, or was only accessible through crappy SOAP-y APIs. So what does present-day PHP look like (well, the Red Hat PHP version I was able to use is still in the 5.x series)? TL;DR: sometimes it was definitely ugly, but at least I can happily live with the cod…

Continue reading...

Automated tests

An article, posted more than 7 years ago filed in pragmatisch, bdd, tdd, development, failsafe, technisch ontwerp, programmeren, architectuur, schetsen, uitleg & testing.

Earlier I already wrote a bit about technical debt. Not having automated tests is often considered a form of technical debt.

What are automated tests?

You test to be sure that something works, and works correctly. You write automated tests (or have them written) because verifying by hand that everything works correctly takes a lot of time. After all, when your application is still changed frequently, you also want to be sure that it keeps working. Automated tests are small programs that test whether components work correctly in isolation (unit tests) or in combination (integration tests).

Integration and unit tests

With unit tests, small components are checked individually to see whether they still work. For example, one test can repeatedly check whether the amounts in a quotation are still added up neatly, and another whether the expected VAT amount still comes out.
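A minimal sketch of what such a unit test could look like (hypothetical Quote class and amounts, in Ruby with minitest; the original article shows no code):

require "minitest/autorun"

# Hypothetical Quote: line item amounts are summed and 21% VAT is calculated.
class Quote
  VAT_RATE = 0.21

  def initialize(amounts)
    @amounts = amounts
  end

  def subtotal
    @amounts.sum
  end

  def vat
    (subtotal * VAT_RATE).round(2)
  end
end

class QuoteTest < Minitest::Test
  def test_amounts_are_added_up
    assert_equal 300, Quote.new([100, 200]).subtotal
  end

  def test_expected_vat_amount
    assert_equal 63.0, Quote.new([100, 200]).vat
  end
end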

With integration tests, or syst…

Continue reading...

A developer’s status update: Test driven deadlock

An article, posted more than 7 years ago filed in bdd, tdd, test, driven, development, deadlock, writer's block, block, status, failsafe & architecture.

No worries, I do value testing. But test driven? It depends.

The last few weeks I’ve been working, off and on, on building a crawler. The thing triggers a series of scheduled tasks that can run in parallel, possibly generating new tasks (which are consequently scheduled again in their own task-specific queue) and so on. The end goal is structured copies of external resources (read: webpages). But I’ve been stuck close to the start for quite a while, setting up the base architecture using the test-driven development approach. And I’m failing; it’s going too slowly :(

Small victories

Like many developers, I have built parsers/crawlers before. But this time I wanted to make a GoodParser™: more extensible, more flexible and, foremost, robust, with fault-tolerance built in from the start. But I’m not getting near the end result and it is frustrating.

The TDD school says to collect small victories, and oh yes, I have had quite a number of victories already. But the end is nowher…

Continue reading...

Try not using Javascript first

An article, posted about 8 years ago filed in javascript, web, front-end, development, html, css, internet, rest & kiss.

My guiding principle in web-development is (still): Always make things work without (client side) JavaScript first.

Aside from offering graceful degradation of the experience, progressive enhancement leads to better code. Three reasons why:

  1. it forces you as a developer to think about logical endpoints for your form submits, your data requests etc. Typically this leads to fewer cases of overloading a resource with all kinds of unrelated functionality (yep, I'm a big REST-first advocate).
  2. your application will probably be more web-native, and hence more future proof, more easily cache-able, etc.
  3. the front-end JavaScript to enrich the experience will typically also be less complex and can be generalized more easily.

Yes, I do shiver when I hear things like CSS in JS, KISS!

Photo by [Dmitry Baranovskiy](https://www.flickr.com/photos/dmitry-baranovsk…

Continue reading...

murb blog