At last, the final part of the series is here. In this part I'll cover the technical and implementation aspects of my data visualization project. If you haven't already, I suggest reading the first and second parts before continuing with this post.
This decision posed a practical problem: as much as I like programming, it's currently not my strongest suit (after all, my field is design, not computer science), and sooner rather than later we started running into difficulties. Still, I have a get-your-hands-dirty philosophy, so I started learning d3.js, a large and amazing JavaScript data-handling library that harnesses the power of SVG and CSS3 to create great visualizations.
Before going into details, I'd like to give a quick overview of the implementation process:
Following the previous image, we created fake data sets consisting of a value (the type of protein variation) and a number (the position in the protein's chain of amino acids) and used d3 to build a very rough prototype:
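To give you an idea of what those fake data sets looked like, here is a minimal sketch. The variation names and the generator function are illustrative assumptions, not the actual data or code we used; in the prototype, records like these were bound to SVG marks with d3's data join.

```javascript
// Hypothetical variation categories -- illustrative only, not real Ensembl terms.
const VARIATION_TYPES = ["missense", "synonymous", "stop_gained", "deletion"];

// Generate `count` fake records for a protein `proteinLength` amino acids long.
// Each record pairs a value (the variation type) with a number (the position).
function makeFakeDataset(proteinLength, count) {
  const data = [];
  for (let i = 0; i < count; i++) {
    data.push({
      // 1-based position in the amino-acid chain
      position: 1 + Math.floor(Math.random() * proteinLength),
      // the type of variation at that position
      type: VARIATION_TYPES[Math.floor(Math.random() * VARIATION_TYPES.length)],
    });
  }
  return data;
}

const fake = makeFakeDataset(120, 10);
// In the d3 prototype, something along these lines consumed it:
// d3.select("svg").selectAll("circle").data(fake).enter().append("circle")...
```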
The next step involved analyzing real data from the database. Ensembl offers two ways to extract data: through its dedicated REST API, or by manually downloading files in GFF (General Feature Format). The design team downloaded the data for one (rather small) protein and analyzed it in a standard spreadsheet application. We realized that there were in fact three distinct sets of data (if you're interested: the exons, the domains and the actual protein variations) and that the common link between these three groups was the position of the amino acid in the protein's consecutive chain.
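A rough sketch of that grouping step, assuming GFF's nine tab-separated columns. The feature type names (`exon`, `protein_domain`) are simplified assumptions for illustration; the real files use Ensembl's own vocabulary, and our actual analysis happened in a spreadsheet, not code.

```javascript
// Split one GFF-style line into its nine tab-separated fields.
function parseGffLine(line) {
  const [seqid, source, type, start, end, score, strand, phase, attributes] =
    line.split("\t");
  return { seqid, source, type, start: +start, end: +end, attributes };
}

// Sort records into the three sets we found: exons, domains, variations.
function groupFeatures(lines) {
  const groups = { exons: [], domains: [], variations: [] };
  for (const line of lines) {
    if (line.startsWith("#")) continue; // skip GFF header/comment lines
    const f = parseGffLine(line);
    if (f.type === "exon") groups.exons.push(f);
    else if (f.type === "protein_domain") groups.domains.push(f); // assumed type name
    else groups.variations.push(f); // everything else treated as a variation here
  }
  return groups;
}
```

The `start`/`end` coordinates are what let the three sets be cross-referenced by amino-acid position.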
These findings were important because they meant there were potentially three different sub-visualizations that had to display information synchronously, which would almost certainly become a technical challenge later on.
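One common way to keep several views in sync is a shared selection that each view subscribes to. This is a minimal observer sketch of the idea, not the tool's actual architecture; in the real tool the listeners would redraw three linked d3 views.

```javascript
// Shared "selected amino-acid position" that any number of views can follow.
function createSelection() {
  const listeners = [];
  let current = null;
  return {
    // A sub-visualization registers a callback to be notified of changes.
    subscribe(fn) { listeners.push(fn); },
    // Selecting a position notifies every subscribed view at once.
    select(position) {
      current = position;
      listeners.forEach(fn => fn(position));
    },
    get() { return current; },
  };
}

// Usage: the three hypothetical sub-visualizations stay in lockstep.
const selection = createSelection();
selection.subscribe(pos => {/* redraw exon track at pos */});
selection.subscribe(pos => {/* redraw domain track at pos */});
selection.subscribe(pos => {/* redraw variation track at pos */});
selection.select(42); // all three update together
```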
Once the data was examined, we had the opportunity to work with the amazing web developer Carlos Cruz, who lent us his time and expert knowledge to create the final visualization tool. It was built with a JS framework and d3 and went through an undisclosed number of iterations, one of which can be seen here:
We call this step "dynamic design" because the website was no longer composed of static HTML; instead, it fetched a specific set of data in real time via a drop-down menu and displayed the corresponding graphics.
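The select-fetch-render cycle can be sketched as follows. This is an assumption about the shape of the code, not the tool's real implementation: the data source is injected so the actual Ensembl REST call (which would be asynchronous) or a test stub can be swapped in, and `render` stands in for the d3 redraw.

```javascript
// Called when the user picks a protein from the drop-down menu.
// `fetchProtein` is an injected data source (in the real tool, an async
// request to the Ensembl REST API); `render` redraws the visualization.
function onProteinSelected(proteinId, fetchProtein, render) {
  const data = fetchProtein(proteinId); // get this protein's data set
  render(data);                         // update the graphics accordingly
  return data;
}
```

Wiring it to an actual `<select>` element would just mean calling it from the element's `change` handler.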
This was probably the most important and rewarding step of the whole project: we could finally see the design proposed in the previous weeks come to life, presenting complex data in an innovative and clearer way, and thus validating our work as information visualization designers.
The final step involved not just making the visualization "look nice" but applying important UI, UX and behavioral principles to ensure an intuitive, clear and smooth user experience.
This concludes the final step of the workflow. Of course, a lot can still be done to improve the current state of the tool, and it's far from perfect, but I believe it's a good start and a great exercise that shows how designers can make an impact in a very serious, scientific field of work.
So there you have it. I hope these posts gave you a basic idea of how a data visualization process can look from a designer's perspective, and that they encourage you to venture into realms of human activity still untouched by the amazing approach of design thinking and user-centered design. If you have comments or questions, please drop me a line!