Engineering Archives – Building Productive

Integrations Series – Data Mapping

Antonio Bajivić

Working in web development and having fun with electronics brings me joy. In my free time, I mix working out, socializing, and reading.

March 27, 2024

Welcome back to our integrations blog series!

In the last post, we tackled authentication and all the other steps needed to set up a connection between Productive and Xero.

Now, let’s talk about data mapping – why do we need it?

As you already know, every system stores data differently, and without proper mapping, it’s like trying to fit together puzzle pieces from different sets.

Come along as we dig into the significance of data mapping in ensuring a seamless integration!

What about data consistency?

All data from Productive must be correctly mapped in Xero when copying invoices. We utilize explicit data mapping to ensure this.

All the integrations available in Productive are implemented through the integrations model, and each integration is specific, mapping different data in its own way. For this reason, the integration model defines an options attribute in JSON format, intended to store all data specific to a particular type of integration.

After successful authorization, the user explicitly selects integration settings (such as exporting invoice numbers or payment sync) and maps data (account code mappings) between systems.


Data such as taxes and currencies are retrieved from Xero and automatically stored in the integration options field. This information will be essential for the user in subsequent integration tasks, such as exporting invoices:
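
For illustration, the stored options could end up looking roughly like this (the field names below are assumptions, not the actual schema):

    {
      "tenant_id": "selected-xero-organisation-id",
      "invoice_numbering": "productive",
      "payment_sync": true,
      "currencies": ["EUR", "USD", "GBP"],
      "tax_rates": [
        { "name": "20% (VAT on Income)", "tax_type": "OUTPUT2", "rate": 20.0 }
      ],
      "account_code_mappings": { "default": "200" }
    }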


User interface – Integration settings

General settings:

When the customer is successfully connected, a modal with settings opens. Here, the customer decides how to export invoices from Productive to Xero and whether they want data synchronization between Xero and Productive. They also choose which system, Productive or Xero, generates invoice numbers (e.g., Invoice – 1) when a new invoice is created, and which system generates the invoice PDF in its specific way, including the invoice number. Finally, an initial status is set for invoices created for the selected branch, and this information is sent along as a reference field and an internal note.


Map settings:

When setting up a Xero account, there is an option to map each service type. In other words, a specific Xero account is set for each type; the Xero account is the equivalent of the Productive service type. Each line item on the invoice belongs to a specific service type, and when copying an invoice to Xero, we need to know which Xero account each of our values maps to. This is the next step after configuring the general settings.

In this example, we can see what the object looks like for mapping options for the entire integration:
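
A hypothetical sketch of that object (the surrounding structure is illustrative, while fieldOptions and fieldOptionLabelMap are discussed next):

    const integrationMappingOptions = {
      serviceTypeAccounts: {
        label: 'Xero account per service type',
        defaultRequired: true,
        // which Xero account codes can be selected
        fieldOptions: ['200', '260', '310'],
        // how each option is displayed to the customer
        fieldOptionLabelMap: {
          '200': '200 – Sales',
          '260': '260 – Other Revenue',
          '310': '310 – Cost of Goods Sold',
        },
      },
    };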


Let’s focus on the fieldOptions and fieldOptionLabelMap. With these properties, we define how we want to map service types and how we want to display them to the customer for selection. We obtain the list of Xero accounts from the integration model on the backend, and on the frontend, we map them using these getters:
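
A sketch of what those getters could look like, assuming the integration model exposes the fetched Xero accounts under its options (the property names here are assumptions):

    // Build the selectable values from the Xero accounts stored on the integration.
    get fieldOptions() {
      return this.integration.options.xeroAccounts.map((account) => account.code);
    }

    // Map each account code to a human-readable label for the dropdown.
    get fieldOptionLabelMap() {
      return this.integration.options.xeroAccounts.reduce((labels, account) => {
        labels[account.code] = `${account.code} – ${account.name}`;
        return labels;
      }, {});
    }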


To ensure the validity of the integration update form, a default Xero account MUST be selected. Customers with multiple service types can map only some of them to specific Xero accounts; the default Xero account is then used for all the other service types.

Conclusion

In conclusion, integrations are crucial in enhancing productivity and efficiency in tools like Productive. The post series highlights the significance of integrating with platforms like Xero, showcasing the OAuth 2.0 protocol for secure authentication and the meticulous data mapping process. This seamless integration not only simplifies tasks such as invoicing and payment synchronization but also ensures consistency and customization in transferring data between systems with distinct business logic.



Integrations Series: Authentication And Connection

Antonio Bajivić

Working in web development and having fun with electronics brings me joy. In my free time, I mix working out, socializing, and reading.

February 20, 2024

Integrations are a big part of any software, but they’re crucial to the functionality of a tool like Productive.

That’s why we decided to dedicate a post series to discussing the importance of integrations in Productive. From enhancing functionality by connecting different tools to streamlining processes and improving user experience, integrations make life easier. We’ll showcase one of our most-used integrations, the one between Productive and Xero, which allows users to seamlessly transfer invoices and synchronize payments between the two tools.

Your agency uses Productive for managing client projects and project financials. At month’s end, you face the daunting task of issuing and sending invoices upon invoices, then replicating the process in Xero. Afterward, you also have to individually mark each payment as received.

And this is where the power of integrations kicks in. With one simple integration, invoices are synced between Xero and Productive, and any payment received in Xero is automatically recorded in Productive.

The integration between Productive and Xero looks interesting, right?

But before actually using the integration, we need to set it up first, and that’s the main focus of this blog post! We’re exploring the implementation of the OAuth 2.0 protocol.

Let’s dive right in!

First, How Do You Connect Xero and Productive?

OAuth 2.0 (Open Authorization 2.0) is an authorization protocol that is considered an industry standard.

Utilizing the OAuth protocol, Productive is granted access to the Xero account without accessing the user’s credentials.

Key features of the OAuth 2.0 protocol:

  • Granular access control
  • Authentication without sharing credentials
  • Token-based authentication (access_token & refresh_token)


OAuth2 flow

In the following steps, the authentication process is explained:

1. The user authorizes Productive, granting access to their own data on the Xero platform:
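
In practice this is a redirect to Xero’s authorize endpoint, roughly like the following (the client ID, redirect URI, scopes and state are placeholders):

    https://login.xero.com/identity/connect/authorize?
      response_type=code&
      client_id=YOUR_CLIENT_ID&
      redirect_uri=https://app.productive.io/xero/callback&
      scope=openid profile email accounting.transactions offline_access&
      state=RANDOM_CSRF_TOKEN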


2. After successful authentication, the user is redirected back to the redirect_uri defined in the previous step with two values added as query params:

  • code → short-lived token that needs to be exchanged for access_token
  • state → serves as protection against Cross-Site Request Forgery (CSRF) attacks: if the received state attribute value does not match the value submitted in the previous step, the authentication process is terminated

Ensuring the redirection of users back to the Productive app is a crucial aspect of the OAuth flow due to the sensitivity of the information contained in the redirect_uri.
To ensure the user’s redirection to the correct location, we have securely stored the redirect URI within the Xero app.

3. Exchanging the verification code (code) for an access token (access_token):
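
A sketch of that request (a standard OAuth 2.0 authorization-code exchange; the credentials and redirect URI are placeholders):

    POST https://identity.xero.com/connect/token
    Authorization: Basic base64(client_id:client_secret)
    Content-Type: application/x-www-form-urlencoded

    grant_type=authorization_code
    &code=SHORT_LIVED_CODE
    &redirect_uri=https://app.productive.io/xero/callback

The same endpoint is later called with grant_type=refresh_token to get a new access token once the old one expires.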

4. Retrieving the generated tokens:
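
The token response is a standard OAuth 2.0 payload, roughly like this (values are placeholders):

    {
      "access_token": "ACCESS_TOKEN_JWT",
      "refresh_token": "REFRESH_TOKEN",
      "expires_in": 1800,
      "token_type": "Bearer"
    }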

access_token is used to call the Xero API, while the refresh_token is used to refresh the access token once it has expired.

5. Retrieving and selecting the tenant whose resources are being accessed:

Each user can have multiple Xero organizations (referred to as tenants).
Xero requires that the xero_tenant_id field is sent as a header param in every HTTP request.

The following shows the retrieval of all available tenants, from which the user later selects a tenant for the current integration in the Marketplace settings:
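
A sketch of that call (Xero lists the authorized organisations on its connections endpoint; the response below is trimmed to the fields we care about):

    GET https://api.xero.com/connections
    Authorization: Bearer ACCESS_TOKEN

    [
      {
        "tenantId": "SELECTED_TENANT_UUID",
        "tenantType": "ORGANISATION",
        "tenantName": "ACME Digital"
      }
    ]

The tenantId chosen here is the value that is later sent as the tenant header on every Xero API request.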

Now, Let’s Create an Integration

In Productive, as well as in many other applications, there is a list of applications called the marketplace. In the marketplace, customers choose the integration they want to use. When the integration is selected, the connection process begins by clicking the “Connect app” button. The connection flow will be demonstrated using the example of Xero.

1. Creating and Connecting An Integration

When an integration is established, a record for it is created in the system. For the creation to succeed, it must satisfy the authentication requirements on the backend. All integrations require a redirect URI, which is used to redirect back to Productive after the integration is connected so that its settings can be adjusted. Once the URI is generated, the integration creation process begins.

If the organization has branches or subsidiaries, you’ll first need to select the subsidiary (branch) for which you want to use Xero.

When creating an integration on the backend, authentication verification is performed, and if all requirements are met, the integration process begins.

During the integration connection, the customer is directed to the interface of the system they want to integrate with, where they provide credentials so that the external system can verify they are an existing, valid user. Once completed, the process returns to the marketplace via the redirect URI, initiating the setup of the integration.

2. Redirect

After the customer is redirected to Productive, the data obtained from the external tool is set on the integration model. For example, Xero always returns a code and an org, representing the authorization code and the organization to which the integration connects and exports data in Xero.

Once the integration model has been updated, a call to the backend occurs again to update the integration with new data.

After updating the model, if there are no errors, we set the parameters of the model to transition to the integration setup route, i.e., editing the integration settings.

Conclusion

With successful authentication and integration connection, we’re finishing the first part of the integration post series. As described in this blog post, understanding OAuth 2.0 becomes not just a necessity but a powerful tool to enhance user experience, safeguard sensitive information, and foster a more trustworthy digital ecosystem. After successful authentication, the redirect configured for the external system, in this case Xero, brings us back to Productive to continue with the rest of the integration setup.

In our next post, we’ll break down data mapping and show you why it’s a must-know for smooth integrations. Don’t miss out!


Custom Fields: Give Your Customers the Fields They Need

Nikola Buhiniček

Backend Tech Lead at Productive. Pretty passionate about working on one product and taking part in decision-making. When I’m not coding, you’ll probably find me chilling with my friends and family.

November 14, 2022

Here at Productive, we’re building an operating system for digital agencies.

But, because each agency is different (think type, size, services they offer, the way they’re set up as an organization…), they need customization options for their workflows. So it’s pretty hard to model all those needs and use cases through a unified data model.

If only there were a way to let them shape those models to their own needs.

Let’s say that one of our customers, ACME digital agency wants to keep track of their employees’ nicknames and to be able to search them by that field. Other than that, they would also like to keep track of their birthdays and be able to sort them and group them by that date.

To me, as a developer, this sounds as simple as it gets—add two new columns to the people table, open those attributes to be editable over the API and send them back in the response.
But should we do that? Should we add all kinds of fields to our models even if those fields are going to be used only by a handful of our customers?

Let me show you how we tackled this type of feature request and made a pretty generic system around it.

What Did Our Customers Want?

It was pretty clear to us what our customers wanted, and that was:

  • to be able to add additional fields to some of our models (People, Projects, Tasks, …)
  • to have various data types on those fields (text, number, or date)
  • to be able to search, sort, or even group by those fields

Our Approach

The Custom Field Model

As we’re building a RESTful API formatted by the JSON:API specification and storing our data in a MySQL8 relational database, a few things were pretty straightforward: we need a new model, and we’ll name it Custom Field (naming wasn’t an issue here 🥲).

The main attributes of that model should be:
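
Essentially a name, a data type (text, number or date) and the type of model the field extends. A minimal sketch of that, written as a migration (the column names are assumptions):

    class CreateCustomFields < ActiveRecord::Migration[7.0]
      def change
        create_table :custom_fields do |t|
          t.string  :name, null: false                  # label shown to the user
          t.integer :data_type, null: false, default: 0 # enum: text / number / date
          t.string  :customizable_type, null: false     # which model it extends (Person, Project, Task, ...)
          t.references :organization, null: false       # custom fields are defined per account (assumption)
          t.timestamps
        end
      end
    end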

How To Store the Field Values?

OK, so now that we know how to define custom fields, how can we know which value someone assigned to a custom field for some object? And where to store that information?

Three possible solutions came to mind:

1. Add a limited number of custom_field columns to our models

We can add a few custom_field columns to our models and that will work for some of our customers, but there will always be others that need a few extra fields. Adding numerous columns to our models surely isn’t the best solution; we can do better than this 😅


2. Add a join table

As mentioned before, while relying on a relational database, a join table sounds like the go-to approach. That table would be a simple join table between the custom field and a polymorphic target (yay, Rails 🥳). Other than those foreign keys, we would have a column to store the value.


3. Add a single JSON column to our models

This sounded as flexible as it gets. It would be a simple map where the key would be the custom field ID and the value would be the assigned value for that custom field.
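
For example, a person with two custom fields (nickname with ID 7 and birthday with ID 8) could carry something like this in their custom_fields column (IDs and values are made up):

    {
      "7": "Johnny",
      "8": "1990-04-01"
    }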

Why We Ended Up Choosing JSON

The first solution was just too limited so we discarded that one immediately and focused on the remaining two solutions.

On one hand, a better design would be to have the custom field values represented by a model but on the other hand, we won’t actually do much with that data. That would just be data that our users set on our objects, data that isn’t important for our business logic. So a simple JSON column didn’t sound bad either.

The searching and sorting aspect of this feature request was probably the most important one for us. That was supposed to work as fast as it gets, without being a burden to our performance.

That’s why we implemented both solutions, tested a lot of searching/sorting/grouping scenarios (we’ll go through that in more detail soon), and then checked the metrics.

The faster solution was the one with the JSON column, and that made sense to us. That solution doesn’t use JOIN clauses in SQL since the values are written directly in the searched table and can be queried in the WHERE clause. Luckily for us, MySQL8 supports a bunch of great functions to work with JSON columns (JSON_EXTRACT, JSON_UNQUOTE, JSON_CONTAINS and others).

Great! Now that we know how to store the custom field values too, let’s dig into the coding.

From a development point of view, we did the following:

  • Added a new model, Custom Field, and implemented CRUD operations that can be called over the API
  • Wrote schema migrations that added a JSON column – custom_fields – to some of our models (people, projects, tasks, …)
  • Opened the custom_fields attribute so it can be edited over the API
  • Wrote a generic validation that checks if all the values in the custom_fields hash have the appropriate data type
  • Added the custom_fields attribute to the API response of the appropriate models

That was most of the work we needed to do to be able to manage custom fields in our models.

But…what about the searching and sorting aspect of custom fields?

Searching Through Custom Field Values

We already had a generic solution written for searching over the API.

We have a format of sending query params for searching, like filter[attribute][operation]=value. For searching through custom fields, we wanted to keep the same format, so we ended up with a quite similar one – filter[custom_fields][custom_field_id][operation]=value.

We had to add an if-else statement that would handle the custom fields filtering in a different way than filtering through other attributes as the format contained one additional argument—custom_field_id.

What was different in the filtering logic was that we have to load the custom field that’s being filtered by and check what data type its values are. That’s needed to cast the values into numbers or dates (text values don’t need any casting).

So the query params and their SQL query counterparts, based on the custom field type, would look like this:
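
A sketch of those pairs for a text, a number and a date field (the field IDs and operation names are made up, and the exact SQL we generate may differ in detail):

    # text custom field (id=7)
    filter[custom_fields][7][contains]=john
    WHERE JSON_UNQUOTE(JSON_EXTRACT(people.custom_fields, '$."7"')) LIKE '%john%'

    # number custom field (id=9)
    filter[custom_fields][9][gt_eq]=10
    WHERE CAST(JSON_EXTRACT(people.custom_fields, '$."9"') AS DECIMAL(10, 2)) >= 10

    # date custom field (id=8)
    filter[custom_fields][8][lt_eq]=2022-12-31
    WHERE CAST(JSON_UNQUOTE(JSON_EXTRACT(people.custom_fields, '$."8"')) AS DATE) <= '2022-12-31'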

Sorting by Custom Field Values

The concept of sorting by attributes is something we also already tackled by abstracting logic.

The only thing that changes when sorting by custom fields is that we first need to cast the values and then sort by them.

Once again, there’s a small change in the format for custom fields sorters (sort=custom_fields[custom_field_id]) compared to when sorting by a standard attribute (sort=attribute). We need to handle the custom_fields sorters separately because we have to load the desired custom_field and check its type.

Then the ORDER BY statement, based on custom field types, looks like this:
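
For example, sorting by a date field with ID 8 or a number field with ID 9 could translate to something like this (a sketch, with the people table as an example):

    -- date custom field
    ORDER BY CAST(JSON_UNQUOTE(JSON_EXTRACT(people.custom_fields, '$."8"')) AS DATE) ASC

    -- number custom field
    ORDER BY CAST(JSON_EXTRACT(people.custom_fields, '$."9"') AS DECIMAL(10, 2)) DESC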

Grouping by Custom Field Values

This was a fun one. The main point here was that you should include the custom fields as some kind of columns stated in the SELECT statement so that you could later use those columns in the GROUP BY statement.

To get the custom field in the SELECT statement, you have to create a virtual column for it. All we needed to do was to extract the values of the grouped custom field and give that virtual column an alias so that we could reference it in the GROUP BY statement. For the column alias we went with the format custom_fields_{custom_field_id}.

For a custom field with id=x, this is done as follows:
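
A sketch of that SELECT, again using the people table as an example:

    SELECT people.*,
           JSON_UNQUOTE(JSON_EXTRACT(people.custom_fields, '$."x"')) AS custom_fields_x
    FROM people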

Once we have the virtual column defined, the grouping part gets done simply, by adding the GROUP BY statement with the earlier mentioned alias.

So in the end, you get a SQL query like:
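
For example, a simplified sketch that counts people per value of the grouped custom field:

    SELECT JSON_UNQUOTE(JSON_EXTRACT(people.custom_fields, '$."x"')) AS custom_fields_x,
           COUNT(*) AS people_count
    FROM people
    GROUP BY custom_fields_x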

What Our Customers Got

A simple way to define Custom Fields:

And a place to assign values to their fields:

Summa Summarum

We made it possible for our customers to define custom fields in our data models. Also, we made it possible to search, sort and group by those fields.

It wasn’t long before we had even more requests that built upon our custom fields architecture. The fields we supported at first were okay, but now our customers wanted more field types. They wanted:

  • to have dropdown custom fields
  • to have relational custom fields – fields where the values would be objects from one of our existing data models

But before we dig into that, let’s give some time for these basics to sink in. I’ll be back soon with another blog post in which I’ll cover how we solved that new set of feature requests.


Learning Ember: The Easier Way

Davor Tvorić

Frontend Engineer at Productive. Excited about anything related to computer science. I spend most of my personal time reading and playing video games.

November 10, 2022

A couple of months ago, there was a huge shift in my work-life. Or so I thought.

Instead of working in Vue or React, I had to learn Ember for my new frontend position at Productive. I was sure I would need quite some time to get used to it. Up until a few months back, I didn’t know what was going on in the Ember ecosystem. Although this didn’t scare me, I felt like it would be a hefty challenge because I wasn’t sure what I was getting myself into.

Turns out, it’s really not that different from the frameworks I already know. All of the usual things you’d expect are there. Things like store management, component slots, dependency injection and much more. And they were a bit easier to use since it all came out of the box! 

You don’t have to spend a lot of time deciding between libraries, patterns or technologies because a number of them are already there when you just install Ember. It does have some nuances, pitfalls and you still have to choose between some libraries (UI libraries, I’m looking at you), but I haven’t worked with a framework where you didn’t have to worry about anything. After I took all this in, the hefty challenge didn’t seem so bad anymore. Now, this was just a matter of getting used to the framework.

As I’ve started to learn about Ember on a deeper level, a lot of the concepts were familiar to me. Some were described exactly as you’d expect, some were named differently and some used different terminology.

But since Ember has such a long history (for a Javascript framework, at least), there are some terms I wasn’t aware of. That’s why I started to write down anything that I wanted to learn more about. This included libraries, phrases, patterns and technologies. I was sure this would help me in the long run, so I’m sharing it with anyone who’s just starting out with Ember.

This is especially helpful if the codebase you’re working on has a couple of years under its belt. It pays off knowing how and why things were done so you don’t accidentally break a functionality when refactoring.

I’ve talked long enough, so here’s the list!

Embroider

  • A modern, full-featured build system
  • Some of the build features it is supposed to provide: reduced build and reload times, tree shaking for Ember-related modules and components, and support for arbitrary code splitting
  • Currently opt-in, but intended to become the default in the future

Glimmer

  • DOM rendering engine, architected like a virtual machine (the Glimmer VM)
  • Builds a “live” DOM from Handlebars templates and cheaply updates it after data changes
  • Glimmer components can coexist with classic components

Classic Components

  • Older-style components that do not use native classes
  • Glimmer components are preferred nowadays

Handlebars

  • A templating language, not specific to Ember, used in Ember to define component templates
  • A superset of Mustache templates that adds some functionality to make writing templates easier

Mustache

  • Can be used to template anything, not just HTML
  • Called logic-less because it has no if statements, else clauses or for loops

Broccoli

  • An asset pipeline, used for converting ES6 to ES5, SCSS to CSS, etc.
  • Supports constant-time rebuilds
  • Came as a replacement for Grunt

“Data down, actions up”

  • Represents a unidirectional flow of data: data is passed down to a child component or subroute, and the child component receives the actions that modify the given data
  • Helps with the separation of concerns and avoids complex data loops
  • Not specifically related to Ember, but mentioned in the docs a lot

Ember Helpers

  • Javascript functions that can be called from the template
  • Ember offers some helpers out of the box, like let, get and concat
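
As a quick illustration, a template using a couple of these built-in helpers could look like this (a minimal sketch):

    {{#let (concat @firstName " " @lastName) as |fullName|}}
      <p>Name: {{fullName}}</p>
      <p>Nickname: {{get @person "nickname"}}</p>
    {{/let}}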

Ember Engines

  • A specific type of Ember addon that allows multiple logical applications to be composed into a single application from the user’s perspective
  • Requires a host application, since engines don’t boot themselves
  • Helpful when trying to separate the different areas of a single application
  • Can live in the host repository or in an entirely different repository

Ember Test Helpers

  • DOM and other testing helpers that are not automatically included when Ember is installed

Ember FastBoot

  • Ember’s server-side rendering (SSR)
  • Does not require codebase changes for it to work

Ember Modifiers

  • A way to interact with the DOM in Ember (instead of manual DOM manipulation)
  • Some modifiers come out of the box, but you can also write custom ones

Autotracking

  • Ember’s reactivity model
  • Decides what to render and when

QUnit

  • A testing framework, not specific to Ember, but used in Ember by default
  • It originated in the jQuery project and was later extracted as a separate project

Ember CLI

  • The official way to create, build, test and serve Ember projects
  • Other frameworks have their own equivalents as well

Octane

  • The current edition of Ember, released in December 2019
  • Introduced a lot of new concepts and newer ways of developing, such as Glimmer components and modifiers

Polaris

  • Ember’s next edition
  • Will introduce more new concepts and functionality

I’m sure some things might be missing, but this is what was the most puzzling to me. Hopefully, you’ll find some use of all these terms!

Also, I feel like it would be kind of rude not to mention the resources I used while learning Ember (yay, more lists)!


Testing the Test

Ivan Lučin

VP of Engineering @ Productive. Frontend engineer under the hood. Outside of working hours—a happy husband, dad of two girls and a wannabe musician.

November 9, 2022

Code in automated tests should also be tested… right?

A lot of developers avoid writing automated code tests, or at least writing proper ones. Having a basic rendering test for a component or a class, generated by your framework, doesn’t count as testing.

This is completely reasonable. Writing tests screws you up on so many levels:

  • it lowers your confidence, showing you how bad your code actually is
  • it drags your delivery date, which you so-badly underestimated
  • it triggers your OCD because it’s impossible to achieve 100% test coverage
  • it generates more work, because you end up with a lot more code to maintain

So it isn’t surprising that only experienced devs do it properly. You need to feel the struggle to actually understand its benefits. And, you need to learn how to test the stuff that will bite your ass in the future.

There are different types of tests: unit, integration, acceptance, e2e, smoke, visual regression tests, etc. Every type of test introduces a new set of problems and requires a different perspective on the code being tested.

The biggest trap when writing tests is that you actually never know if the test is correct. You’re writing code that tests other code, which means you have even more opportunities to make mistakes.

So you need a system to test the test, right? I don’t actually have a solution for this, only a few pieces of advice to give.

Minimize the Logic

The code in the test should be as trivial as possible. No if statements, no for loops, no logic—this is forbidden. Minimize the complexity wherever you can.

The test should also function as documentation, which means it should be readable by a human. Your Product Manager or QA engineer should be able to understand it.

Having a good testing framework is important. At Productive, we’ve developed our own set of abstractions over the ember-test-helpers library, which is provided by Ember.

Common Structure

Most of the code in unit/integration tests should look the same.

A typical test consists of:

  • setup: the part where you setup data mocks, stubs and the test environment
  • construction: render the component or instantiate a unit you’re testing
  • assertions: where you make sure that the output of the component/unit is correct
  • interactions (+ more assertions): where you interact with the component (clickin’ them buttons) or with the object (callin’ them methods)

Typical structure of a component rendering test

Being rigid about this structure is crucial. This will help you keep your tests tidy and readable. It will minimise the possibility of making mistakes, but it will not prevent you from falling into “the biggest trap”.

Testing the Test

A great way to test the test is to change the original code the test is testing and then see whether the tests fail as they should. It sounds trivial, but I’m pretty sure you’re not doing it that often (at least I’m not 🙂).

As soon as you see the green light in the tests, you feel happy and move on to the next thing— because you’re probably in a rush. And what if you tested the wrong thing?

Let’s take a look at the following example:
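
Here is a reconstruction of that example as a sketch: it uses the standard Ember/QUnit rendering-test helpers and a qunit-dom assertion instead of the jQuery-style $() lookup mentioned below, and the component and selector names are made up:

    import { module, test } from 'qunit';
    import { setupRenderingTest } from 'ember-qunit';
    import { render } from '@ember/test-helpers';
    import { hbs } from 'ember-cli-htmlbars';

    module('Integration | Component | items-list', function (hooks) {
      setupRenderingTest(hooks);

      test('it hides the items count when there are no items', async function (assert) {
        // setup
        this.set('items', []);

        // construction
        await render(hbs`<ItemsList @items={{this.items}} />`);

        // assertion – the selector is missing the leading dot ('.items-count'),
        // so it matches nothing and the test passes no matter what gets rendered
        assert.dom('items-count').doesNotExist();
      });
    });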

Pretty basic. If there are no items, the “items-count” label should not render. We’re testing the {{if}} statement in the template. You’ll see a green light in your CI/CD pipeline and move on happily, right?

Not so fast. Take a closer look at the test: the CSS selector is invalid. We’re missing the dot in the $('items-count') call. So it’s completely wrong, but the test is still passing. 🤯 This is a common pitfall.

Whenever you write an assertion, make sure it’s failing before it passes. You can do that by commenting out (or adjusting) the code responsible for the logic you’re testing.

In this example, you would need to remove the {{#if @items.length}} statement in the template and check out if the test is failing. You would notice that the test isn’t failing, which would indicate that you wrote an invalid test.

This is how you test the test.

Mutation Testing

The idea of changing the codebase to validate how well the tests are written is not new. It’s called Mutation testing and there are testing libraries that do that automatically. A good mutation library should be able to handle the problem from the example above.

If you’re interested, check out this article on Javascript Mutation Testing from Olle Lauri Boström. He’s using Stryker for mutating his Javascript tests. To avoid diving into more detail, I’m just gonna drop a quick quote from the article:

The only way to know that a test actually works is when it fails when you make a code change.

— Simon de Lang

TDD

You can also try the TDD approach: first write a failing test, then the code that makes it pass. This would have caught the problem in our example, because the broken test passes from the start, and a test that never failed isn’t allowed by TDD.

TDD is a somewhat holistic and naive approach. It doesn’t always work out in practice, at least not in the “Frontend land”. But the idea is great, so take all the good parts from it.

How Good Are My Tests?

Our recently joined colleague, Jean Petrić, wrote an excellent academic paper on a similar topic. With a few of his fellow researchers, he tried to answer: “How Good Are My Tests?”

They made a list of 15 testing principles that capture the essence of testing goals and best practices from a quality perspective. Go on and read the article, it’s a gem!

That’s all folks. Happy testin’ 👋


Pull Requests—The Good, the Bad and Really, Not That Ugly

Ivan Lučin

VP of Engineering @ Productive. Frontend engineer under the hood. Outside of working hours—a happy husband, dad of two girls and a wannabe musician.

November 9, 2022

Recently, I stumbled upon an article on arkency: Disadvantages of pull requests.

I soon realized this was interesting stuff, worth sharing with my colleagues. Immediately I started writing a follow-up article, debating every claim made in the text.

We practice PR review workflow in Productive’s engineering team daily. This workflow helps us with:

  • Ensuring code quality and stability
  • Domain knowledge sharing in the team
  • Organized mentoring activities

Productive’s Engineering team builds one SaaS product and works on two massive codebases—the backend (API) and the frontend (web app).

Every process comes with overhead, and so does this one, but I’ll argue it’s worth it.

In this blog post, I’ll quote key points from the mentioned article and give my thoughts on the matter.

Now, let’s discuss the disadvantages of pull requests!

1. More Long-Living Branches, More Merge Conflicts

PRs promote developing code in branches, which increases the time and the amount of code staying in a divergent state, which increases chances of merge conflicts. And merge conflicts can be terrible, especially if the branch waited for a long time.

Yes, this is true. But it shouldn’t be that big of an issue if you practice a good distribution of work in your team.

Most products consist of many modules which are usually different screens or sections in the app. You should always divide work across those modules, in a way that one PR doesn’t interact with the other.

If your codebase is not that modularized, you can ensure a few good practices to avoid this issue:

  • Always split a big chunk of work into smaller, but functional PRs.
  • Rebase the branch daily and update it with new changes from the master/develop (I usually do this before every code review).
  • Always separate preparatory refactorings and merge them before implementing the actual feature changes. This is great advice given in the original article!

The most common mistake developers make is putting too much work into one PR, making both the review and the merge process harder than they should be.

2. The Reviewability of a Change Decreases With Size

PRs tend to promote reviewing bigger chunks of code.

More often than not, developers fall into this trap. They think something like “I’ll just refactor this while I’m already here” or “I need to complete the whole thing before sending it for review”. This leads to oversized PRs and a slow development process.

3 PRs with 100 lines are reviewed and merged faster than 1 PR of 300 lines.

Of course, not all changes are the same. If we have 100 lines of boilerplate code, which is usually framework scaffolding, there’s no complexity there. Tests take a lot of lines but should be easy to read. Code with fewer dependencies will always be easier to grasp too.

Basically, the more a developer is experienced and familiar with the codebase, the more lines he/she is allowed to put in one PR. Less experienced developers on the project should strive to make their PRs as thin as possible.

3. Short Feedback Loop Makes Programming Fun

You code something up but it’s nowhere close to being integrated and working. You now have to wait for the reviewer, go through his remarks, discuss them, change the code…

I couldn’t agree more. There’s nothing more exciting in programming than getting a great idea, implementing it fast, and testing it out instantly.

The original author’s claim is that PR reviews kill that short feedback loop which makes programming less fun. But if that’s true, then it also means that:

  • Your team is producing large and overcomplicated pull requests.
  • You don’t have a good deployment and review environment setup.
  • You’re not coding features in safe-enough isolation.

Again, it all revolves around making small and easily reviewable PRs, but we’ve already acknowledged that, so let’s move on.

If you do have a large PR waiting for review, that doesn’t mean you can’t try it out in the wild. You could easily deploy the code to a staging environment, or even production, but on a separate domain so only you and your team can try out the new changes.

At Productive, for our main frontend app, we have an automated PR deployment setup. When you create a new PR on Github or push commits to an existing PR, a Github action is triggered that deploys the code to the “review” environment.

It takes the name of the git branch and deploys the frontend app to the subdomain with the same name. For example https://cool-new-stuff.review.productive.io.

Voilà, now you can try your PR changes in the live environment two minutes after you’ve created the PR.

Plus, there’s another Github action that posts a comment with a test link, so you don’t need to type it into the browser’s URL bar manually:
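
A heavily trimmed sketch of what such a workflow could look like. The action names are real, but the build commands, deploy script and domains are placeholders rather than our actual setup:

    # .github/workflows/review-deploy.yml (illustrative only)
    name: Review deploy
    on: pull_request

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: yarn install && yarn build
          - name: Deploy to <branch-name>.review.productive.io
            run: ./scripts/deploy-review.sh "${{ github.head_ref }}"
          - name: Comment with the test link
            uses: actions/github-script@v7
            with:
              script: |
                const branch = context.payload.pull_request.head.ref;
                await github.rest.issues.createComment({
                  ...context.repo,
                  issue_number: context.issue.number,
                  body: `Review build: https://${branch}.review.productive.io`,
                });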

This is a frontend app example hosted as a static asset on a CDN, but you could do it similarly with backend apps.

There’s another mechanism for achieving a short feedback loop which is safe if done properly. With feature flags, you can ensure that your PR is not introducing any breaking changes and that new code is implemented in isolation from existing features.

Feature flags allow you to try the new changes in an isolated scope, without changing anything in the rest of the system. The scope can be either one user, one account, or just one API request. It’s a powerful mechanism that our team uses on a daily basis. It allows us to push new changes to the live environment rapidly, without worrying about breaking things.

4. Reviews Tend To Be Superficial

Proper review takes the same amount of focus as actual coding. Why not pair-program instead?

I agree that PR reviews tend to be superficial, especially when there’s too much to review. That only means that the reviewer is doing their job poorly. You can recognize this situation when you see a complex PR that only has code-style comments like “Fix naming” or “Wrong indentation”, etc.

If the PR seems like it’s too complicated—why not pair-review instead?

Pair programming (not to be confused with pair-reviewing) is useful for mentoring junior developers, but it’s a waste of time in a lot of other cases. That’s because you’re not solving complex problems all the time.

A better way to do it is to pair-review the PR with the author after most of the boilerplate is done and proper research has been made. Then you only talk about the complex stuff and you can leave the author of the PR to explain their thought process in detail.

There’s one more thing here… Programmers usually get this “feeling” when something stinks about their solution. They should be honest and leave comments in the PR, in every place where they feel they could be doing something wrong. That greatly increases the reviewability of the PR.

5. Merging Is Blocked by Remarks That Shouldn’t Be Blocking

Remarks made by the reviewer can fall anywhere on the spectrum of whether they should block merging or not: from a mistake that’ll bring production down to cosmetic suggestions or opinions.

These situations happen quite often with inexperienced reviewers. When commenting on the PR, you should tell the author if the comment is a merge-blocker or not, and never insist on details too much.

When I’m working with inexperienced developers, I’ll usually tell them exactly how to fix something, with an explanation. For non-blocking mistakes, I’ll just comment to watch out for that in the next PR. Usually, I’m the one to merge the PR, so they don’t have to wonder whether the PR is finished or not.

When reviewing experienced dev’s PR, I’ll tell them what I think about the problematic places in the code, but I’ll mostly approve the changes and leave them to the author to do the fixes the way they want to.

Not everything the reviewer says is always correct or the right thing to do. The author should be the one to decide on the comments—because it’s their work.

6. It’s Easier To Fix Than To Explain the Fix

The original author has to understand it first, agree with it, and then is expected to implement it. Often it’s better to let the original author merge his thing, and let the reviewer implement his remark post-factum?

A lot of times it’s hard to explain the fix to the author and it would be more efficient just to implement the fix yourself. But merging the unfinished PR could be dangerous, so I’d never suggest that. You’d also be missing the opportunity to educate the original author about the problem.

Pull requests don’t have to be done by only one person. Why wouldn’t you, the reviewer, just pull the branch, implement, and push the fixes back upstream? That way the author gets notified about what you did and you both have an opportunity to discuss the issue.

That’s also an effective thing to do when there are a few simple fixes required on the PR. Let’s say you notice a typo somewhere in the PR. It’s much faster to fix it yourself than to leave a comment and wait for the author to do the fix. You could do that directly in Github and it would take like 30 seconds.

7. Developers Are Slower To Adapt the Responsibility Mindset

The second one knows that every line they write can screw up things for other developers or even bring production down. They watch their step, they know they are the only one responsible for this change. It shortens the delay between making a mistake and seeing the effect of it.

This is an interesting and important claim. Can your business afford to tear the live server down just to teach the developers some responsibility? Sometimes it’s worth it, but in most cases, you wouldn’t want to do that.

Code stability is not the only reason why we should be reviewing our PRs. There are also benefits of mentoring, knowledge sharing, better release organization, feature completeness, structured discussion around a diff, etc.

At Productive, we encourage people to self-merge PRs without a review when they’re confident that the changes are fine. The developer should have the freedom to do that in the part of the codebase where they have more ownership. I believe this is how the responsibility mindset can be trained. When people build something from the ground up, they will feel the responsibility to keep everything in great shape.

8. PRs Discourage Continuous Refactoring

It’s good to follow boy scouts’ rule: always leave the place better than it was before. With PRs, though, this rule is harder to apply.

PRs will slow you down on refactoring. When you realize that some refactoring is needed, you need to switch to another branch to implement the refactoring separately, submit the PR to review, wait for the approval and merge the changes back into the original branch.

But it’s still better to do it that way, in my opinion. Refactoring PRs shouldn’t be complicated to review since there shouldn’t be any changes in the logic, only the code structure. That’s especially true if you have good test coverage. If you don’t, then maybe you should write the tests first?

Maybe the refactoring PR doesn’t need to go through the review process if everything is trivial. If it does, then mention to the reviewer that this is a refactoring-only PR that’s blocking feature development and the reviewer should prioritize the approval.

Having good team communication around PRs will minimize most of the downsides of the PR review process.

9. Negative Emotions

Mandatory PR reviews can induce way more negative emotions than needed. We all have limited emotional budgets — it’s better not to waste it on avoidable stuff.

When it comes to pull request reviewing, the responsibility is both the author’s and reviewer’s. There’s one simple rule to follow:

Don’t be a prick when reviewing or authoring the PR.

The original article is basically saying that we should let developers do what they want because they might get offended by the review comments.

Obviously, that’s ridiculous. This doesn’t have anything to do with PR reviewing itself—it’s a communication problem. If your engineering team doesn’t know how to communicate, then you’ll have bigger issues than developers offended by a PR review.

10. How Do You Switch to Branches With Migrations

You obviously sometimes need migrations while working on a branch. What do you do if you then have to switch back to another branch locally? Cumbersome.

Well, just do the migration separately on the main branch, before implementing the remaining logic in the PR. The world won’t collapse if you don’t always work on branches with PR reviews. You can always omit the process if you have a good reason for it.

Conclusion

The original article contains a list of suggestions on how to improve your team’s PR workflow. Those are all great ideas, which I encourage you to try. Thanks to arkency for writing a great and educational article! We’ve also published a copy of this article on Medium.

As with other organizational processes, we shouldn’t take them too seriously because they tend to generate overhead and slow work down. We should be pragmatic about PR reviews and when we notice they’re becoming a burden—we can skip them sometimes.

Don’t follow the rules blindly, try thinking with your head and do the right thing depending on the situation you’re in.

And a little remark for the end—let’s all be humble and respectful towards each other while reviewing each other’s code!


How React Ruined Web Development

Ivan Lučin

VP of Engineering @ Productive. Frontend engineer under the hood. Outside of working hours—a happy husband, dad of two girls and a wannabe musician.

November 8, 2022

Last week I attended .debug, a developers conference, where my company held a booth.

The idea was to have a “change my mind” kind of setup, where we represent a radical idea, invite people to debate with us, and show them that we’re building some interesting stuff at Productive.

We decided to go with this one:

My first opponent was this young lad on the right, who builds apps with React native.

Jokes aside, React is a fine library. It’s important in web development because it introduced declarative and reactive templates, a paradigm shift that everyone needed at the time. There was a problem with rendering engines and reactivity back then (6 or 7 years ago) and React solved it pretty well.

As a side note, Ember solved the same problem earlier. It wasn’t as performant, though, and the framework was too opinionated to catch up with the way React had done it.

useEffect(makeMess)

What happened after React gained popularity was a mess. It started a new trend in the community where everything revolves around hype, novelty, and creating new paradigm shifts. Every few months there were new libraries emerging, setting new standards of how we should write React web apps, yet solving problems that were, for the most part — already solved.

Let’s take “state management” as an example. Since React is missing a traditional dependency injection system (DI is achieved through component composition), the community had to solve this problem on its own. And it did. Over and over again. Each new year brought a new set of standards.

React State Management’s motto — “New year, new me!”

React is just a rendering engine, and in a typical web app, you need many libraries to build a framework for a project — e.g. data layers, state management, routing, asset bundlers, and more.

The ecosystem behind React gave you too many choices of this sort, which fragmented the tech stack and caused the infamous “Javascript fatigue”.

One of the trends that also emerged was “framework comparison obsession”. JS frameworks were constantly compared with properties like rendering speed and memory footprint. This is irrelevant most of the time because a slow app is not caused by a slow JS framework, it’s caused by bad code.

The line for discussion getting longer and longer…

As with every trend that is taking over the world — this one went too far, damaging new generations of web developers. I’m wondering how it’s possible for a library to be the most relevant skill on an average web developer’s CV? Even worse, it’s not even a library but a module inside that library. React hooks are more often mentioned as a “skill” as opposed to some actual skills like code refactoring or code review.

Seriously?! When did we stop bragging about the important stuff?

Why don’t you tell me, for example, that you know:

How to make simple and readable code

… not by mentioning the most starred library on Github, but by showing me one or two of your finest snippets.

How to manage state

… not by mentioning a popular state management library (preferably ending with “X”), but by telling me why “data should go down and actions should go up”. Or why state should be modified where it was created and not deeper in the component hierarchy.

How to test your code

… not by telling me that you know Jest or QUnit, but by explaining why it’s hard to automate end-to-end tests and why minimal meaningful rendering tests are 10% the effort and 90% the benefit.

How to release your code

… not by mentioning that you use CI/CD (as every other project today that has more than one person working on it), but by explaining that deployment and release should be separate so you should code new stuff in a way that doesn’t mess with the old stuff and can be turned on remotely.

How to write reviewable code

… not by mentioning that you’re a “team player”, but by telling me that code review is just as hard on the reviewer’s side and that you know how to optimize your PRs for readability and clarity.

How to build solid project standards

… because unless you’re a one-man band, you’ll hate your life if you don’t follow strict standards and conventions in a project. You should tell me that naming is hard and the broader the scope of the variable, the more time you should invest in coming up with a good name for it.

How to review other people’s code

… because code review ensures product quality, reduces bugs and technical debt, builds common team knowledge, and more — but only if done thoroughly. Code review shouldn’t only be done top-down. It’s a great learning mechanism for less experienced team members.

How to find your way in any JS framework

… because it’s not about the GitHub stars, it’s about common principles that most of today’s JS frameworks share. Finding out about the pros and cons of other frameworks makes you understand your framework of choice better.

How to build MVPs

… because technology is only a tool for making products, not the process. Spending time on optimizing the process is always better than spending time on arguing about technology.

How to optimize: not too early, not too late

… because most of the time, optimization isn’t necessary at all.

How to pair-program

… because pair-programming is, like code review, the most important practice for knowledge sharing and building team cohesion. It’s also fun!

How to continuously refactor

… because every project has technical debt and you should stop whining about it and start refactoring. Every new feature should be preceded by minor code refactoring. Big refactoring or rewrites never turn out well.

So yeah, that’s why I think React ruined web development. People at the conference were intrigued by the claim and joined the debate eagerly. I had a great conversation with a few experienced React developers. Nobody agrees with the title of this article, saying “ruined” is too strong of a word. But most of them agree with the problems discussed in this article.

You can try to convince me that React isn’t that bad, and I will absolutely agree with you! 😄

But instead, let’s debate about the more important topics — the work that we actually do as software engineers.
