How These 6 Engineering Professionals Build With Scalability in Mind

We caught up with six local tech professionals to learn how and why they build technology with scalability in mind. 

Written by Janey Zitomer
Published on Jul. 30, 2020

A Ferris wheel works the same way whether or not every seat is taken. But consider the operating costs involved when only one passenger decides to ride. The notion is similar to how Artifact Uprising Senior Software Engineer Austin Mueller thinks about website scalability. 

“Our goal is not to waste resources if only two people are using our site and not to let our customer experience suffer if two million people are using our site,” Mueller said.  

He and the following five Colorado tech professionals have made it their mission to build with scalability in mind. They’ve done so using microservices architecture and platforms like Kubernetes and Amazon Web Services. 

But all of the tools and techniques in the world won’t fix an infrastructure with innate issues. The process must start with a functionality intake and end with consistent reevaluation. 

“We use tests and metrics to guide our decisions and try to stay away from fingers in the wind,” Automox Director of Software Engineering Brad Smith said. 

Below, Smith and a group of CTOs, senior software developers and site reliability engineers outline how they do just that.  

 

Brad Smith
Director of Software Engineering • Automox

When defining system scalability requirements, Brad Smith’s team at Automox always begins with a design session. The director of software engineering thinks about scalability not only at the application or system level but across an entire organization. Smith keeps his organization ahead of the curve by asking engineers to use tests and metrics to guide their decisions and to never overcompensate for performance and scale.

 

In your own words, describe what scalability means to you. 

Scalability can be a loaded word. I think most people refer to it as an application or system that increases performance in proportion to the resources shouldering that load. But you should think beyond scaling web services; an organization needs to be able to scale as well. If a service is deemed scalable, we need the people and processes to keep that service ahead of the curve. 

Here at Automox, scalability is an important part of how we build our infrastructure and organizations and deploy our applications. 

Customers depend on us to make sure their endpoints are patched with the latest fixes. We have to be able to meet their demand. Every time Microsoft has a big Patch Tuesday, our system must be able to respond to the added load without letting our customers down. Most of the time we meet the challenge. But there are times when we fall short. Failure is OK as long as you learn from it and make the system scale further next time.

 

How do you build this tech with scalability in mind? 

When defining system scalability requirements, we always start with a design session. We get together as a group and brainstorm. The team has an opportunity to discuss potential issues or pitfalls and define what success looks like. Each stakeholder is represented at the table. 

There is a natural tendency to make applications and systems bullet-proof, but it’s better to make incremental changes, release them, measure them and make data-driven decisions going forward. We use tests and metrics to guide our decisions and try to stay away from fingers in the wind. 

In a startup, the trade-off between performance and scalability is paramount. Never try to overcompensate for performance and scale. Ninety percent of the time, you’re not going to need it. Paying for resources that you may never use is not being scale-ready. It will make you slow to respond when the next need to scale does occur.

 

What tools or technologies does your team use to support scalability, and why?

Some of the tools we leverage to make scaling easier for us are Kubernetes, actionable metrics with Prometheus and AWS-hosted services like RDS. Kubernetes provides us a way to scale our services (monolith and microservices) with automation and little effort. 

We rely heavily on metrics to gauge the health of our applications and services. Prometheus, along with Thanos, offers a scalable metric back end that will continue to grow with us. 

When it comes to our datastores, we typically use hosted offerings like Amazon’s RDS. Anytime you can entrust one aspect of your stack to a proven partner, that is one less thing you have to spend time and money on. If the team does not have to worry about scaling or backups for PostgreSQL, that is a win.

Lastly, we use Jira for project management and work tracking. If you do not understand how much work your team can do, then you will never know how or when to scale your team. Precision, as it relates to planned and unplanned work, is key to predicting when to scale up or down.

 

Ben Wright
Senior Software Engineer • Maxwell

Without a transparent and maintainable code base, Senior Software Engineer Ben Wright said that Maxwell employees wouldn’t be able to achieve their ultimate goal: opening up the homeownership process. To do so, Wright’s team has focused on building the company’s front-end infrastructure with a reusable, component-based architecture. They are also building an external component library every team member can access and rely on. 

 

In your own words, describe what scalability means to you.

As a front-end engineer, scalability means building a maintainable code base that can grow with additional users and developers. This is important for Maxwell as a company and for our technology specifically because, ultimately, our mission is to empower people to make mortgage lending simpler and more accessible. As you can imagine, increasing transparency is only achievable and impactful with more users and more customers. We need the data and scale to have a big impact in this massive industry, so scalability is critical. If we don’t think about scalability, we won’t succeed as a company.

 

How do you build this tech with scalability in mind? 

We have focused on building the front end with a reusable component-based architecture. We have built out an external component library to be the source of truth for engineering, design and product decisions. The library allows us to maintain consistent design in our UI/UX, establish patterns within our component code and provide guidelines for new developers to build new pages or components that match.

 

What tools or technologies does your team use to support scalability, and why? 

We use ESLint and RuboCop to enforce a common code style, vigorous code reviews (both asynchronous and in-person) to ensure code quality, thorough automated testing to prevent bugs and unintended consequences, and more.

 

Austin Mueller
Senior Software Engineer • Artifact Uprising

When it comes to site scalability, Artifact Uprising Senior Software Engineer Austin Mueller sees his job as ensuring performance isn’t impacted by the number of site visitors on any given day. To make that vision a reality, his team uses AWS S3 and Lambda in addition to autoscaling Kubernetes clusters to process customizable digital photo orders as they come through.

 

In your own words, describe what scalability means to you. 

Because we build and sell products, our site activity varies greatly depending on the time of day and season. For this reason, scalability must be top of mind.

Scalability means, first, having a reliable service that can handle two or two million customers without downtime, interruptions or delays in service. Second, we want to make sure that our infrastructure can scale up or scale down without manual oversight. Our goal is not to waste resources if only two people are using our site and not to let our customer experience suffer if two million people are using our site.  

 

How do you build this tech with scalability in mind?

Part of our day-to-day mindset is thinking ahead about what we are building and how it will behave under load. This means designing our infrastructure and applications in such a way that they can automatically scale up and down to handle traffic spikes. 

Our team puts a lot of effort into learning and implementing scalability best practices. We use Amazon Web Services (AWS), which has wonderful scalability features baked in, for many parts of our infrastructure. We want to make sure we are well equipped to use these services to their fullest potential.

 

What tools or technologies does your team use to support scalability, and why?

One critical feature of our application is the ability to upload and store photos. This service must be able to scale, especially as it requires a lot of network traffic. We work to keep upload wait times to a minimum, even if thousands of people are uploading photos at once. 

We are able to leverage the capabilities of AWS S3 and Lambda to provide an infinitely scalable service that does not degrade, no matter how many photos are being uploaded. We also use autoscaling Kubernetes clusters to process orders as they come through. As more orders are queued up, we can automatically spin up new servers to ensure that orders are processed quickly and efficiently.
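The “spin up new servers as orders queue up” behavior Mueller describes follows the proportional rule Kubernetes’ Horizontal Pod Autoscaler documents: desired replicas = ceil(current replicas × observed metric ÷ target metric), clamped to configured bounds. A minimal sketch — the queue-depth numbers and bounds are purely illustrative, not Artifact Uprising’s configuration:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the core Horizontal Pod Autoscaler rule:
// scale the current replica count by the ratio of the observed metric
// (e.g. queued orders per worker) to its target, rounding up, then
// clamp to [min, max] so the system neither scales to zero nor grows
// without bound.
func desiredReplicas(current int, observed, target float64, min, max int) int {
	d := int(math.Ceil(float64(current) * observed / target))
	if d < min {
		d = min
	}
	if d > max {
		d = max
	}
	return d
}

func main() {
	// 4 workers observing 50 queued orders each, with a target of 20
	// orders per worker:
	fmt.Println(desiredReplicas(4, 50, 20, 2, 32)) // prints 10
}
```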

 

Topher Lamey
Senior Software Engineer • StackHawk

Senior Software Engineer Topher Lamey emphasizes scalability in his work at StackHawk because of its role in building a high-quality software product for customers. In order for dev professionals to be most productive in the codebase, Lamey said the CI/CD pipeline must be able to handle a steady flow of changesets. Not only that, but the test/deploy process and architecture need to function seamlessly. 

 

In your own words, describe what scalability means to you. 

Scalability, to me, means that the delivered software scales appropriately across multiple levels. The ability to easily triage and fix environment issues allows each team member to be highly productive in the codebase. 

When it comes to a software product that people will pay money for, scalability refers to how that product will handle the workload of its users (human or not). The product should predictably scale in terms of performance and resource usage to deliver functionality. Factors like monitoring resource usage and key system metrics, architecting services to distribute resource workloads, using proven technology, and writing and profiling performant code all contribute to scalability.

In the early stages of a company, it’s more important to be flexible and figure out what the product is. There’s no need to build for Facebook-scale at that point. However, as a company grows, scalability needs and expectations need to be identified and budgeted for.
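One lightweight way to do the kind of profiling Lamey mentions is Go’s standard-library `testing.Benchmark`, which can be invoked outside a test suite. The string-concatenation comparison below is a generic example, not StackHawk’s code:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// concat builds a string naively; each += copies the whole string.
func concat(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// builder does the same work with strings.Builder, which amortizes
// allocations and is the idiomatic fast path.
func builder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := make([]string, 1000)
	for i := range parts {
		parts[i] = "x"
	}
	candidates := []struct {
		name string
		f    func([]string) string
	}{{"concat", concat}, {"builder", builder}}
	for _, c := range candidates {
		// testing.Benchmark runs the closure with an auto-tuned b.N
		// and reports timing, no test harness required.
		r := testing.Benchmark(func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				c.f(parts)
			}
		})
		fmt.Printf("%s: %d ns/op\n", c.name, r.NsPerOp())
	}
}
```

Measurements like these, rather than intuition, are what let a team decide whether an optimization is worth shipping.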

 

How do you build this tech with scalability in mind?

We have around a half-dozen engineers, so we need some scalability around our dev process. We have to account for multiple engineers working simultaneously in the same codebase.

In terms of the software, we think about scalability as a requirement as we plan new functionality. As the functionality’s requirements are discussed, we talk about scalability needs. Some general questions as part of the discussion include: What needs to scale in this scenario? What would break first as the usage of this functionality increases? What resources are impacted? 

Then, as we implement functionality, we are collaborative about design options and choices. Our dev process has gates around automated testing and manual code reviews to help spot issues. We then deploy changes to environments that attempt to mirror production, including monitoring and alerting. This way we can be sure new changes scale appropriately.

 

What tools or technologies does your team use to support scalability, and why?

To help scale dev processes, we use GitFlow to simplify changeset management across our projects. Our entire build/deploy process is automated using a mix of Docker Compose, Kubernetes and AWS CodeBuild/ECS. As part of GitFlow, we gate merges to branches based on automated tests and peer code reviews. We deploy changes to test environments that closely mirror production so we have a high degree of confidence that scalability will not be impacted.

Some technologies we use because we know they scale are Spring Boot, Python, Kotlin, AWS RDS/PostgreSQL and Redis. Additionally, we use Logz.io and Grafana to help monitor and handle alerting for our systems. Our internal services communicate with each other using gRPC rather than JSON/REST. gRPC is a highly scalable Google technology that implements stateless RPC using language-agnostic definitions called protobufs. gRPC provides a way to define and share common RPC message and method definitions across the board. We’ve also gone with Kubernetes because we can easily set up service scaling rules around resource usage. Because our services are stateless, it’s relatively easy to spin up new instances to help process the workload.

 

Leonardo Amigoni
CTO • Fluid Truck

Fluid Truck Share’s CTO Leonardo Amigoni appreciates Google’s compiled programming language Golang because of its lightweight nature. He said it has allowed his team to focus on the community truck sharing platform’s business needs rather than being burdened by their own technology. Fluid Truck Share is built on microservices architecture so that the engineering team can scale parts of the system individually. 

 

In your own words, describe what scalability means to you. 

Scalability is a characteristic of a software system or organization that describes its capability to perform well under an increased workload. A system that scales well can maintain its level of efficiency during increased operational demand.

Scalability has become increasingly relevant at Fluid Truck as we acquire more customers and expand into new markets. For this reason, we have migrated away from the traditional monolith application paradigm in favor of a microservice architecture. In a traditional monolith application, all system features are written into a single application. Often they are grouped by feature type, controllers or services. For example, a system may group all user registration and management under an authorization module. This module may contain its own set of services, repositories or models. But ultimately, they are still contained within a single application codebase. 

When certain areas of the codebase need to scale with an increase in user demand, monolith applications often require scaling the entire application. With microservices, the separation of system components allows parts of a system to scale individually.  

 

How do you build this tech with scalability in mind?

Fluid Truck has adopted Golang and Kubernetes for our microservices. Golang’s lightweight nature has allowed us to focus on our business needs rather than being burdened by our technology. Its simplicity has allowed us to expand our infrastructure and maintain our platform while accommodating developers from all backgrounds. 

Moreover, we chose Go because of its simple concurrency model. In an environment where concurrency and parallelism are a must, goroutines have allowed us to scale processes across multiple processor cores using a simpler multithreaded model for execution than what we previously had.
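The goroutine fan-out Amigoni describes can be sketched with nothing but the standard library — one worker per CPU core pulling jobs from a channel. The `process` function and integer job type here are placeholders, not Fluid Truck’s code:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// process stands in for a unit of CPU-bound work.
func process(job int) int { return job * job }

// runPool fans jobs out across one goroutine per CPU core and collects
// the results. Results arrive in no particular order, which is fine for
// independent work items.
func runPool(jobs []int) []int {
	in := make(chan int)
	out := make(chan int)
	var wg sync.WaitGroup

	// One worker goroutine per core.
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in {
				out <- process(j)
			}
		}()
	}

	// Feed the jobs, then close the input so workers exit their loops.
	go func() {
		for _, j := range jobs {
			in <- j
		}
		close(in)
	}()

	// Close the output once every worker has finished.
	go func() { wg.Wait(); close(out) }()

	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	fmt.Println(len(runPool([]int{1, 2, 3, 4, 5}))) // prints 5
}
```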

 

What tools or technologies does your team use to support scalability, and why? 

Kubernetes allows us to manage our infrastructure by deploying machine-agnostic microservices that can be replicated just about anywhere. It is an orchestration tool for containers that ensures our platform scales based upon demand. Microservices are easily scaled using a combination of load balancers and replication sets. We sought to automate our platform’s scalability by containerizing our microservices. Kubernetes was the right tool to help us manage this task.

 

Chris Jansen
Senior Software Engineer • The Trade Desk

When Senior Software Engineer Chris Jansen first joined The Trade Desk six years ago, the adtech company was handling two million queries per second globally. Now they’re handling up to 11 million. Jansen said that, among other strategies, his team has had a lot of success refactoring their own code to reduce complexity and optimize memory usage.

 

In your own words, describe what scalability means to you. 

When I think of scalability, I think about our platform’s literal ability to scale. Scalability is important at The Trade Desk because, as we’ve often said internally, we’re only 2 percent done. Advertising is a $600 billion industry, and we’re always expanding our piece of the pie. To successfully grow as much and as quickly as we have, we have to consider scalability early. It’s built into how we think about every feature we design and build here.

 

How do you build this tech with scalability in mind? 

A distributed architecture is a fundamental building block to scalability. It allows each major component to scale independently as we grow. For example, at The Trade Desk, the components that handle incoming advertising opportunities or bid requests have had to scale more quickly to account for new inventory sources such as connected television. Compare this with our UI, where user growth has been more linear.

Another core strategy for us is frequently analyzing central processing unit and memory performance to see where we can improve. We’ve had a lot of success refactoring our own code to reduce complexity and optimize memory usage.

 

What tools or technologies does your team use to support scalability, and why? 

We are increasingly turning to containerized components and tools like Kubernetes and Airflow for management and scaling. Containers are easier to manage and more flexible than dedicated servers. We’re also using Spark for our more data-dense analytics and machine learning.

 
