Friday, June 18, 2021

Write a ray tracer guided by Jamis Buck's test suite

Background

Having seen people write their own ray tracers, I've been curious about writing one myself. For a while I also believed I could just sit down and write one, without any help, until I tried. At least I failed fast :)

Fast forward a few years to the moment when I was listening to the podcast Developer On Fire, where the interviewer asks each guest for book recommendations, and the book The Ray Tracer Challenge: A Test-Driven Guide to Your First 3D Renderer by Jamis Buck was mentioned. I bought it and started coding.


About the book
The book explains the theory you need and contains unit tests in Gherkin, which means you can translate them into whichever programming language you want to use.
Here's an example of a test that subtracts one point from another, which should result in a vector:
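From memory, the scenario looks something like this (the exact values may differ from the book's):

```gherkin
Scenario: Subtracting two points
  Given p1 ← point(3, 2, 1)
    And p2 ← point(5, 6, 7)
  Then p1 - p2 = vector(-2, -4, -6)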


I used C# and NUnit, which made my test implementation look like this:


The algorithms that make the tests pass are written in pseudocode, so you have to translate them as well.

What I learned
Test first
I really liked the test-first way of coding. It caught errors early, which made bugs easier to spot and correct. It was also good to build up a safety net of tests, because the code got pretty math-intense and would have been hard to debug without them.

Funny / not funny
The fun part was the coding and seeing the resulting images. But composing scenes and trying out different settings was not that fun, which makes it pretty useless to have my own ray tracer to play around with...

Recreational coding
Jamis Buck has also written a book about generating mazes, something he did while recovering from burnout. He shares that story on the podcast Corecursive.

Examples of created images

Here are two examples of images I created.

Snowflakes
This is supposed to look like falling snowflakes. I made it after learning how to create cylinders and cones and how to group simple shapes together.



The making of a kitchen table

This one I made after learning to create cubes and planes and to manipulate them: scaling them in different directions, moving them, and giving them colors and patterns. I also tried to make a kitchen lamp, but found out that the shadow logic is too simple: a transparent object casts as much shadow as an opaque one.

Yes, I had a hard time positioning the table legs :)


The code

It took me quite a while to finish the book. The commit history shows that I began in June 2019 and finished in June 2021. I worked with it in bursts and often had long periods of inactivity.

I skipped writing the code for matrix manipulation and used the MathNet.Numerics NuGet package instead, to advance faster to the chapters that resulted in images.

My GitHub repo for this project.

Searching GitHub for The Ray Tracer Challenge gives me 254 repo hits in 10 languages, most of them in Rust (51), C++ (48), C# (32) and Go (17).

After finishing the book I also noticed that people have posted their learning journeys on YouTube, like this playlist.

Sunday, April 25, 2021

A SourceTree lover sees Visual Studio catching up

Background

This was supposed to be a post about features that are important to my daily work, that the Git GUI SourceTree has and Visual Studio's Git tool lacks. But instead I learned that Visual Studio is catching up with its Visual Studio 2019 16.10 Preview 2.1 release!

File change preview for historic commits

Navigating the commit history and looking at file changes in Visual Studio used to be a pain, but today I downloaded Visual Studio 2019 16.10 Preview 2.1 and explored what's new in its Git tool. Looking at changes in historic commits is no longer a pain!

I also learned that Visual Studio can show diffs either inline or side-by-side.


SourceTree shows all branches in history

In SourceTree you see ALL branches by default, which is the way I like it, because I get the full latest commit history context immediately. This isn't possible in Visual Studio, where you have to select the branch that has the latest commit to see the most recent history.


If you ever want to hide the other branches in SourceTree, you can easily do so by selecting Current Branch in the dropdown, like this.


Showing all branches is a feature under review by Microsoft:
https://developercommunity.visualstudio.com/t/how-do-i-view-the-history-of-all-branches-in-git/934801

Merges are shown better in SourceTree

I find the branch history tree clearer in SourceTree. Compare these two images, which picture a branch branched off of master and then merged back into master.

In SourceTree it looks just like that: a branch going out from master and then coming back into master.


In Visual Studio it looks like a branch has been branched off from master, but then it looks like master has been merged into the branch!? A merge commit has two parents instead of one: a primary parent (the branch merged to) and a secondary parent (the branch merged from), and VS seems to mix them up.


I added a ticket regarding this.

But the image below, taken from this ticket, hints at a rewrite of the commit history tree visualization in VS. No merge is shown, but the tree looks different, and hopefully it will be easier to follow.



SourceTree auto-adds a commit message for merges with conflicts

When a merge results in conflicts, a commit message like the one below is added automatically by SourceTree. Visual Studio adds nothing.
Merged 'master' into 'a_feature_branch'. Conflicts: # Foo.h # Bar.cpp

Being able to see which files have had conflicts can help when you know that a feature was added in a file, but the feature is suddenly gone or buggy: it probably disappeared in a manual merge conflict resolution that went wrong.

This also seems to be a feature under review.

In SourceTree you can pull a branch that isn't currently active

Yes, you can, and I thought it made my branch handling smoother, but while writing this I can't come up with a scenario where it is actually needed.

Wednesday, March 3, 2021

Online clothes shopping with your digital twin


Shopping for clothes online can be hard; among other things, you have to guess which size will fit best. The stated size gives a hint, but is far from a guarantee that the garment will fit you, since sizes can differ from manufacturer to manufacturer.

And if you are unsure about the size, maybe you order the same garment in several sizes and send back the rest. Many people do. In total that adds up to an awful lot of package shipping, which is a waste.

To solve part of this problem, Knowit has created the service Zizr, https://zizr.id/, where you can enter what you have bought and which size fit you. Based on your purchases and the purchase data of lots of other clothes shoppers, Zizr can find others who have your size and thereby give you size recommendations for clothes those others have bought and reported to fit.

Your digital body-size twins don't need to be full-body twins; they can be partial-body twins, since the matching is split into feet, lower body and upper body.

This is a live service, though so far only in beta and partnered with just two stores, Junkyard and Kleins, but the plan is of course to sign up as many stores as possible.

I have created an account and skimmed the two stores. No purchase yet, but I really like the idea! The concept of digital twins feels underused; it is a powerful tool that could be used for so much more. But of course, sharing your personal data is not unproblematic in every domain.

Here you can watch a presentation about the whole development journey of Zizr.


Tuesday, December 22, 2020

There are leprechauns in the software literature

Old "truths"

I have read quite a bit of software literature and noticed that certain examples keep recurring, stated with references to research. I can't remember ever asking myself Can this really be true? and starting to dig into the references. For example, I read "Facts and Fallacies of Software Engineering", a book with a confidence-inspiring title and an introduction that praises research, which lulled me into some kind of belief that this author had really done his homework. But now it seems he fell into the same pit as so many others.

If anyone else is as "easily fooled" as I am, I would like to recommend a book that is something of an eye-opener: "The Leprechauns of Software Engineering: How folklore turns into fact and what to do about it", where the author has really followed the references and tried to find the source of several of these recurring examples. Maybe you have heard that research shows there is a large difference, up to 28 times, in productivity between programmers? Or seen The Cone of Uncertainty, which is said to describe the uncertainty of project estimates at different points in a project?

Image taken from https://www.construx.com/books/the-cone-of-uncertainty/

What do you think the author Laurent Bossavit finds when he follows the trails of references for the examples just mentioned, and a couple more, ever deeper? Well, for example that:
  • the papers are not really empirical research
  • the papers support weaker versions of the claim
  • the papers don’t support the claim directly, but only cite research that does
  • the more recent papers are not original research, but only cite older ones
  • the papers are in fact books or book-length, and you’ll be looking for a needle in a haystack
  • the papers are obscure, hard to find, out of print or paywalled, and thus hard to verify
  • the papers are selected only on one “side” of an ongoing controversy

As someone who has read about these examples time and again, I found this a really interesting book! And I have started to question authors' handling of references, and even research results, a bit more, and have tried to dig on my own a few times. Sloppiness with sources and references doesn't seem to be limited to software literature; it probably happens everywhere. Take, for example, the claim about how judges rule at different times of the day.



With a few references, commonly known truths can apparently be created :)
Early results were often criticized, but decades of research have now accumulated in support of the incontrovertible fact that bugs are caused by bugproducing leprechauns who live in Northern Ireland fairy rings. (Broom 1968, Falk 1972, Palton-Spall 1981, Falk & Grimberg 1988, Demetrios 1995, Haviland 2001)


Saturday, December 12, 2020

Do judges hand down convictions more often when they are hungry?

A well-known study

Maybe you have heard of the study that looks at how judges rule at different times of the day, finding that they more often hand down unfavorable rulings when they are hungry before their meals? It is cited in several books, among them:

  • Thinking, Fast and Slow
    Splits the brain into two systems: System 1, which is fast but sloppy, and System 2, which is accurate but energy-consuming. The study fits the book's idea that when energy levels are low, System 2 doesn't kick in and System 1 is allowed to make sloppy decisions.

  • Black Box Thinking
    About improvement work and how it is hampered when the self-image of the people involved collides with facts, e.g. a judge's self-image as infallible. The study is used to show the need for improvements in the justice system.

  • Life 3.0: Being Human in the Age of Artificial Intelligence
    Brings up the study as an example of where an AI could do a better job: a robot judge.

To me it sounds both plausible and implausible at the same time, but is the study reliable?

About the study Extraneous factors in judicial decisions

Three researchers, Shai Danziger, Jonathan Levav and Liora Avnaim-Pesso, wanted to investigate whether there was any truth to the saying justice is what the judge ate for breakfast. Are judges rational, or are they influenced by legally irrelevant external factors such as hunger or mental fatigue when they rule?

Their research article was published in 2011 and describes a 10-month study in which data was collected during 50 days, covering 1,112 rulings handed down by eight judges. The judges presided over two parole boards serving four major prisons in Israel.

Each day was divided into three sessions, with two meals in between. A judge ruled on between 14 and 35 cases per day, and a case took about 6 minutes on average.

What they discovered was that the share of favorable rulings started at about 65 percent at the beginning of each session, then dropped as the session went on, reaching almost 0 percent favorable rulings at the end!
A prisoner would have a 35 times greater chance of being granted parole by appearing first in a session instead of last, according to Andreas Glöckner.

Proportion of rulings in favor of the prisoners by ordinal position. Circled points indicate the first decision in each of the three decision sessions; tick marks on x axis denote every third case; dotted line denotes food break. Because unequal session lengths resulted in a low number of cases for some of the later ordinal positions, the graph is based on the first 95% of the data from each session.

Can the numbers be right?

The study has become popular and well known, but several people have reacted to the claim that a "hunger effect" or "depletion effect" could have such a large impact. A counter-reaction that has not gotten nearly the same traction.

Keren Weinshall-Margel and John Shapard have responded to the research article with a letter, Overlooked factors in the analysis of parole decisions, in which, after interviewing three defense attorneys, a judge and five prison employees, they bring up some factors they believe were overlooked and may have affected the outcome.

The study claimed that the cases came in random order. But the interviews revealed several things that affected the ordering. For example, all prisoners from one prison were handled before the break, and after the break the board continued with prisoners from another prison. Within each session, cases where the prisoner had an attorney were usually handled before those without one. Being represented by an attorney raises the probability of a favorable ruling from 15% to 35%.

Another factor was that both denied cases and deferred cases were counted as denials, and deferrals occurred more often later in a session. The researchers defended this by arguing that, for the judge, a deferral means the same thing: the status quo is maintained, an easier decision to make when you are tired. In total, 64.2% of the cases were denied, of which 48.4% were deferrals.

Andreas Glöckner has also questioned the study and run simulations showing that the effects are overestimated. One cause he highlights for the declining curves is that sessions fit different numbers of cases, so the sample size shrinks for the later ordinal positions in a session. The rulings for the later cases in long sessions therefore get a disproportionate impact. Combine that with the fact that cases for prisoners without attorneys come last and are denied more often, and you get a declining curve all by itself.
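To see how such a curve can arise, here is a toy simulation, entirely my own sketch with made-up probabilities, not Glöckner's actual model: cases with an attorney (35% favorable) are heard first, cases without one (15% favorable) come last, sessions vary in length, and no hunger effect is modeled at all. Aggregating by ordinal position still produces a declining curve.

```python
import random

def simulate(num_days=1000, seed=1):
    """Toy model: favorable-ruling rate by ordinal position in a session.
    No hunger effect is modeled; only case ordering and session length vary."""
    random.seed(seed)
    stats = {}  # position -> [favorable_count, total_count]
    for _ in range(num_days):
        session_len = random.randint(5, 12)                 # sessions differ in length
        with_attorney = random.randint(2, session_len - 1)  # represented cases are heard first
        for pos in range(session_len):
            p = 0.35 if pos < with_attorney else 0.15       # representation raises the odds
            counts = stats.setdefault(pos, [0, 0])
            counts[0] += random.random() < p
            counts[1] += 1
    return [f / t for _, (f, t) in sorted(stats.items())]

rates = simulate()
print(f"first position: {rates[0]:.2f}, last position: {rates[-1]:.2f}")
```

No simulated judge ever gets tired, yet the aggregated curve falls from roughly 35% at the start of a session toward roughly 15% at the end, which is the kind of artifact Glöckner points to.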

Daniël Lakens dismisses the results based on the unrealistic size of the effect:
If hunger had an effect on our mental resources of this magnitude, our society would fall into minor chaos every day at 11:45 a.m. Or at the very least, our society would have organized itself around this incredibly strong effect of mental depletion. Just like manufacturers take size differences between men and women into account when producing items such as golf clubs or watches, we would stop teaching in the time before lunch, doctors would not schedule surgery, and driving before lunch would be illegal. If a psychological effect is this big, we don’t need to discover it and publish it in a scientific journal—you would already know it exists. Sort of how the “after lunch dip” is a strong and replicable finding that you can feel yourself (and that, as it happens, is directly in conflict with the finding that judges perform better immediately after lunch—surprisingly, the authors don’t discuss the after lunch dip).

Well, what should one believe? Research and statistics certainly seem complex :)

References

The research article
Extraneous factors in judicial decisions 

The letter
Overlooked factors in the analysis of parole decisions

The researchers' reply to the letter
Reply to Weinshall-Margel and Shapard: Extraneous factors in judicial decisions persist

Andreas Glöckner's simulations
The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated

Daniël Lakens' blog post
Impossibly Hungry Judges

Sunday, November 22, 2020

Kanban for Software Development consists of only three rules (or so my memory told me)

The three rules (in my head)

Kanban used for software development has, in my mind, been built on the three rules below.
  • Visualize the workflow
  • Limit Work In Progress (WIP)
  • Measure the lead time
And depending on the team's situation, the rules can be narrowed down even further. For example, if predictability isn't a big concern, the rule about measuring lead time can be left out.
And if team members are conscientious enough about not starting too much new work before finishing already initiated work, the WIP limits don't seem to add much either.
In that case, the only thing needed is to visualize your workflow. But if you don't do that either, I find it hard to get anything out of Kanban.

So, that's why, one day when it became obvious that the tasks on my team's Kanban board were out of sync with reality, I told the developers:
- "If we don't keep the Kanban board in sync with reality, we follow none of Kanban's only three rules!"
One of them responded teasingly:
- "Well, what do you expect from a bunch of we'll-do-as-we-pleases?"
(The word used was "rättshaverister", which I can't translate into a word that captures what I think the person meant.)

There was no further discussion, and the board was adjusted after a while. But having referred to rules brought up from memory, I also remembered that when reading about them a few years ago I never got a good grip on where they come from, what their original source is. So, before banging them into anyone's head again, it was time for a bit of research :)

The closest source: Kanban and Scrum, making the most of both

I remembered reading about the rules in the book Kanban and Scrum - making the most of both by Henrik Kniberg and Mattias Skarin.

So I began by trying to find more info there. And there they were, with a short explanation for each rule. But I lost the word "rules", because the only thing the book says about them is "Kanban in a nutshell" before listing the three points. Not a word about "rules".



Next up: Crisp's homepage

Both Henrik Kniberg and Mattias Skarin work at Crisp, so I took a look at Crisp's homepage. And yes, the same points are mentioned there, with just a touch more context:
"There are many flavors, but the core of Kanban means:"
Ok, so there are different flavors?! Like "Standards are good, everyone should have their own."?



On the page I also found this:
At Toyota, Kanban is the term used for the visual & physical signaling system that ties together the whole Lean Production system. Most agile m­ethods such as Scrum and XP are already well aligned with lean principles. In 2004, however, David Anderson pioneered a more direct implementation of Lean Thinking and Theory of Constraints to software development. Under the guidance of experts such as Don Reinertsen, this evolved into what David called a ”Kanban system for software development”, and which most people now simply refer to as ”Kanban”.
Wow, a pioneer! David Anderson seemed like a promising track to research further.

David Anderson, a pioneer

Crisp's homepage linked to the book Kanban: Successful Evolutionary Change for Your Technology Business by David Anderson. 

"If there is any book that will bring clarity to this, it will be this one!", I thought. 

What I found was:
Kanban uses five core properties to create an emergent set of Lean behaviors in organizations. These properties have been present in every successful implementation, including the one at Microsoft described in chapter 4.
The five properties are:
  • Visualize Workflow
  • Limit Work-in-Progress
  • Measure and Manage Flow
  • Make Process Policies Explicit
  • Use Models to Recognize Improvement Opportunities
Ok... so "Properties", not "Rules". 

But five instead of three? The first three properties map well to the ones mentioned before, but what do the two extra ones mean, and why did Henrik and Mattias drop them?

What the two "new" Kanban properties mean

Make process policies explicit
This can be done by agreeing on and writing down the "Definition of Done" policies you use between the different stages in your process, and perhaps putting them up on your board for good visibility.

Use models to recognize improvement opportunities
Your process can probably be improved by reducing "waste" or by finding and acting on bottlenecks. There are models for doing that, as David says:
Common models in use with Kanban includes the Theory of Constraints, Systems Thinking, an understanding of variability through the teachings of W. Edwards Deming, and the concept of muda (waste) from the Toyota Production System. The models used with Kanban are continually evolving, and ideas from other fields, such as sociology, psychology, and risk management are appearing in some implementations.

Why were they dropped?

David's book predates Henrik's, so why has the number of properties gone from five to three? I asked that question on Crisp's Facebook page. None of the original authors answered me, but another author did; he pointed me to the next book, Kanban in Action.

Others have been pondering this too in Kanban in Action

At first, reading the book Kanban in Action by Marcus Hammarberg and Joakim Sundén made it even messier! But it also revealed interesting info.

There is a section titled
Three principles? I thought it was five properties. Or was it six practices? 
which indicates that others have had a hard time getting clarity about this too. That section contains the text below.
The three basic principles we describe in this section make up the foundation that kanban is based on. Recently, David J. Anderson and others have extended the three basic principles to five properties and later six practices; these are now referred to as the core practices.

Anyway, this info and Google led me to David Anderson's latest definition, called Principles and General Practices of Kanban.

A blog post about Principles and General Practices of Kanban

On March 18, 2020, David Anderson added a post on his blog, from which I've copied the text below. In the post he also examines each point a bit more deeply.

In my book, Kanban – Successful Evolutionary Change for your Technology Business, I identified what I called the “5 core properties” that I’d observed to be present in each successful implementation of Kanban.

Since the book was published, we’ve expanded this list and they are now known as the Principles and General Practices of Kanban.

Principles of the Kanban Method …
  • Start with what you do now
  • Agree to pursue incremental, evolutionary change
  • Respect the current process, roles, responsibilities & titles
  • Encourage acts of leadership at all levels in your organization
General Practices of the Kanban Method …
  1. Visualize (the work, workflow and business risks)
  2. Limit WIP
  3. Manage Flow
  4. Make Process Explicit
  5. Implement Feedback Loops
  6. Improve Collaboratively, Evolve Experimentally (using models & the scientific method)


But Three, where did Three come from?

Well, I can't tell for sure. I've already put too much research effort into this, and I think I can live without knowing, although it doesn't feel like full closure... :) Anyway, now I know more about the history of Kanban as used in software development, which I wouldn't if I hadn't tried to find the root of the three Kanban rules in my mind.

Please, do leave a comment if you have interesting info regarding this! :)