A little chat with: Matteo Papi, Front-end developer

Author:
Kode s.r.l.
Date:
31.10.2023
Topic:
Interview

The world of data science leverages IT skills that are widely used and well known in other fields but are taking on a new face in our industry. If well-structured data visualization is the key to making data analysis concretely useful and meaningful, how many complexities does a front-end developer have to deal with in the world of Big Data? We discuss this today with Matteo Papi, a front-end developer at Kode.

Matteo Papi

Matteo, what does your work as a front-end developer consist of?

A front-end developer typically has an IT background and handles the client-side development of the applications a user interacts with: websites, web management software, mobile applications or dashboards. Basically, the front-end developer is responsible for making an application easy to use and usable across devices, browsers and operating systems.

Here at Kode, I focus on developing applications to best represent knowledge extracted from big data. In this field, data visualization tools and technologies are indispensable for analysing huge amounts of information and making data-driven decisions.

The main goal is to make the extracted information immediate and tangible, even to a non-technical audience. This is fundamental especially in the business environment, where we deal with managers and C-level executives.
A data visualization dashboard provides decision-makers with a high-level view of their most important metrics. It combines numbers, charts, graphs, tables and other graphic elements designed to focus an audience on the metrics that matter. Dashboards can be helpful for spotting problems early, deciding where to allocate resources, and measuring progress over time. By surfacing data through appropriate, easy-to-digest visualizations, a dashboard provides key information at a glance, which means offering visualizations that do not require extensive interactivity.

Data Analysis is an amazing branch of data science, but it risks being an end in itself without visualization tools that can extract insights from data.

What I’ve said so far may lead one to think that this kind of job requires a data science background: that is only partly true, as my skills are more oriented toward the world of software development.

At this point, what was the path that led you here?

I followed a fairly well-rounded path: a science-focused high school seems to prepare you only to continue your studies at university, and in fact that is true only if you can later find subjects you are truly passionate about.

After a first approach to Telecommunications Engineering, I completed my course of study in Humanistic Informatics, where I discovered my passion for both computer development (web-oriented) and data. During my university career, I delved into all aspects involved in the development of web-based solutions that in some way enabled the use and visualization of data.

However, those were theoretical notions, in some cases far removed from a real-world context. I then faced that reality with my first work experience at a web agency based in Lucca, where I deepened and verticalized my skills in programming languages such as JS and PHP. I developed websites and web management software with open-source CMSs (such as WordPress, Typo3, Magento) and mobile applications with ad hoc frameworks (like Ionic).

On that occasion I also deepened my skills with the main databases (MySQL, PostgreSQL and MongoDB). In the end, though, it was an extremely mechanical type of job, repetitive in many steps: the development pipeline was almost always the same across many of the projects I tackled. During those years I tried not to leave behind my passion for the world of data, developing solutions linked to tools such as Google Analytics, which, if configured correctly, can collect the traffic data generated by a website.

In 2018 I joined Kode where, right from the start, I tackled a number of issues that had already intrigued and fascinated me in my university days, first of all the management and visualisation of Big Data. What makes this type of work truly fascinating is how varied the nature and sources of the data are: industrial plants, IoT sensors, wearable devices, logistics data.

Even though I already had solid expertise in web development, I had to verticalize my skills. I faced new frameworks (React, React Native) and open-source software (NodeJS, Docker, Jenkins) useful for developing solutions that visualize any kind of data in the best possible way.

In 2023, it may seem unbelievable, but some companies do not know how much and what kind of data they are able to collect. For these reasons, well-done analysis and visualization tools are not only useful but fundamental.

You mentioned the verticality of the skills required: what makes front-end development for Data Science different?

As I have already mentioned, the world of front-end development is very wide and varied. Working methodology, tools and frameworks change mainly according to the product in development. A mobile application is very different from a website, which in turn is very different from a web management system.

In data science, clients very often ask us to carry out data explorations precisely in order to understand what to develop. That said, once the customer’s needs are understood, the first step obviously involves the data itself: its volume, its type and the data source in which it is stored. Business information is generally available exclusively within the client’s network, which can be reached via VPN (Virtual Private Network). The development of one of our solutions must necessarily take this fundamental aspect into account.

Once the “nature” of the data has been established, the development of one of our applications needs to take into account the costly processing required to retrieve and transform the information. This includes optimal management of the application state and of the calls to back-end services, both fundamental aspects for making a solution usable.
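One common way to keep those costly retrievals under control, sketched here as an assumption rather than Kode’s actual implementation, is to cache back-end responses by query key so the same expensive call is never repeated:

```javascript
// Minimal sketch: cache back-end responses keyed by query so the same
// expensive retrieval hits the server only once. `fetcher` stands in for
// any function that calls a back-end service; both names are illustrative.
const cache = new Map();

async function cachedQuery(key, fetcher) {
  if (!cache.has(key)) {
    // First request for this key: hit the back-end and remember the result.
    cache.set(key, await fetcher(key));
  }
  // Every later request for the same key is served from memory.
  return cache.get(key);
}
```

In a real application the cache would also need an invalidation policy (time-to-live, manual refresh), but the principle is the same: the client asks once, the state layer answers afterwards.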

In addition, the releases of the different services that make up a more or less complex application must take into account the fact that, often, the final deployment has to happen on the customer’s server. This means agreeing in advance on the hardware resources of the machine on which the application will be installed.

Before the final release, we thoroughly test the service to prevent bugs, reduce development costs and improve software performance.

Just to close the topic of front-end development: at Kode we design and build in-house (in our Princess Framework) both the graphics and the modules and components that form the basis of the UI. We do that with the goal of creating recognizable, scalable solutions with a strong visual identity.

And how does a front-end application actually reach the data it has to visualise?

Before representing data in a graph or a table, we first have to “retrieve” it from storage software. It doesn’t matter whether that is a relational or non-relational database, a graph or document database, and so on.

We usually implement operations such as retrieving and manipulating data, or loading and executing an AI model, within a back-end service: a module that sits between the interface and the data source.

I will try to simplify the flow of operations as much as possible; conceptually it is simple. On the client side, a user performs an action that generates a call to a back-end service. That service retrieves and processes the data and returns the information to the client. It is a circular process that originates and ends in the interface, carried out by calls over the HTTP protocol passing data in JSON format.
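That round trip can be sketched in a few lines of JavaScript; the `/api/metrics` endpoint, the payload and the response shape here are illustrative assumptions, not a real Kode API:

```javascript
// Sketch of the client → back-end → client round trip described above.
// `fetchImpl` is injected so the function works with the browser's fetch
// or any compatible implementation.
async function loadChartData(fetchImpl, filters) {
  // A user action (e.g. choosing a date range) triggers an HTTP call...
  const response = await fetchImpl("/api/metrics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(filters), // data travels as JSON
  });
  if (!response.ok) {
    throw new Error(`Back-end error: ${response.status}`);
  }
  // ...and the processed information comes back, again as JSON,
  // ready to be bound to a chart or a table.
  return response.json();
}
```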

The manipulation of large amounts of data always takes place at the back-end level. This is generally preferred in order to perform complex server-side operations while avoiding overloading the client.

We can do some processing on the front-end side as well: sorting, filtering and mapping data. We can also apply simple statistical functions (averages, sums and so on): small, computationally light operations.
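Those light client-side transformations can be sketched in plain JavaScript; the field names and values are illustrative:

```javascript
// Illustrative sensor readings; in a real dashboard these would come
// from a back-end service.
const readings = [
  { sensor: "A", value: 12.5 },
  { sensor: "B", value: 7.0 },
  { sensor: "A", value: 9.5 },
];

// Filter, map and sort: keep sensor A's values in ascending order.
const valuesA = readings
  .filter((r) => r.sensor === "A")
  .map((r) => r.value)
  .sort((a, b) => a - b); // → [9.5, 12.5]

// A simple statistical function: the average of the selected values.
const mean = valuesA.reduce((sum, v) => sum + v, 0) / valuesA.length; // → 11
```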

For these reasons, in my conception of software development, it is always better to distinguish front-end services from back-end services in a clear-cut manner, right from the prototyping of the solution. The two should never overlap, so that the development of each service stays precise and specific.

Some aspects of this work seem to be generalisable for every project, what makes this work less repetitive than others?

The aspect that fascinates me most about working at Kode is the diversity of the projects on a daily basis. You can support a customer in extracting the road signs that appear in a video of a road, in order to build a road-sign registry. Another project aims to implement a predictive maintenance system for a plant. You can optimise last-mile logistics, or support a customer who wants to retrieve and visualise information from wearable devices in real time…

Each project has different challenges that we must solve using the tools available. With this in mind, the methodology and approach to development become critical. This choice allows us to offer modular solutions that can cut development, update and maintenance times.

This work approach characterises the entire team of Kode developers. The shared goal is to create tools and processes that reduce repetition by standardising our solutions as much as possible. As a consequence, we can concentrate our efforts on custom developments.

For years we have been working on the development of a proprietary framework (Princess, as mentioned above). Like a toolbox, it allows us to implement solutions by optimising the release times of the various services.

On the front-end side, the Princess UI Toolkit we are developing consists of a series of modules to visualise data (charts, tables, maps) and a series of services to manipulate it (groupings, filters, statistical functions). We can build more or less complex application pages with these kinds of objects.

The ultimate goal is to develop a dynamic UI that our data scientists can implement and use without writing lines of code.

The toolkit grows in relation to the projects that come into Kode: potentially, each project can bring new features that we add or integrate into those that already exist.

Then I could say much more about Princess, but maybe it’s better if we devote a separate chat to it.
