Everything You Need to Know About Security for a Frontend Engineer Job Interview

My favorite question during an interview for a Software Engineer position is:

What do you do as a software engineer to ensure that the applications you write are secure? This is a very broad question, so please answer based on your experience.

Security is such a broad and important topic that we could discuss this question for hours. Unfortunately, most candidates have next to nothing to say. It is both surprising and terrifying. In this blog post, I will focus on the security aspects of creating web applications as I interview candidates for a Web Engineer position (both backend and frontend).

To understand how we can secure web applications, we first need to understand the enemy. While most candidates can describe, e.g., XSS or SQL injection at a high level, they fail to provide specific examples of vulnerable code snippets. Let's see, then, how various types of vulnerabilities can slip into your source code.


Cross-site scripting (XSS)

Cross-site scripting (XSS) is a type of attack where malicious code (a script) injected by one user is executed in the context of another user. There are various types of XSS; I am presenting two examples of stored XSS below. (The malicious script is stored in the database. It's then fetched and executed by an unsuspecting user after opening one of the application pages.)

XSS in React application

While ReactJS escapes all input strings by design (unless one decides to use dangerouslySetInnerHTML), it is not bulletproof. The simplest way to introduce XSS in a React app is to put a user-provided URL directly into the href attribute of an <a> tag:

export default function App() {
  // This URL is provided by a user - e.g., a link to their home page
  const homePage = "javascript:alert('XSS');";
  return (
    <div className="App">
      <h1>Click the link below</h1>
      <a href={homePage}>My Home page</a>
    </div>
  );
}

You can try it yourself at confident-knuth-d8j3r2 – CodeSandbox!
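One common defense is to validate the protocol of any user-provided URL before it reaches an href. A minimal sketch; `safeUrl` is a hypothetical helper name, and the base URL only exists to resolve relative links:

```javascript
// Allow only http(s) URLs; anything else (javascript:, data:, etc.)
// falls back to a harmless "#".
function safeUrl(url, base = "https://example.com") {
  try {
    const parsed = new URL(url, base);
    return ["http:", "https:"].includes(parsed.protocol) ? parsed.href : "#";
  } catch {
    return "#"; // unparsable input is rejected as well
  }
}
```

With this in place, `<a href={safeUrl(homePage)}>` renders the attacker's `javascript:` link as an inert `#`.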

XSS in dependencies

Any non-trivial web application uses a lot of npm libraries. Library authors do their best to provide the most convenient API possible. But if you check how these neat React components are actually implemented, you might discover a jQuery plugin, direct DOM manipulation, or dangerouslySetInnerHTML, none of which provide the security guarantees of ReactJS.

Let’s assume you were asked to add a nice tooltip to one of the elements on your application dashboard. As this application is highly customizable, an organization admin can set custom colors for the dashboard to match their company branding.

There’s a very popular (~1.3M weekly downloads) library called react-tooltip. It has a nice API, so you npm install it and end up with the following code:

// In this scenario, the primary color can be defined by the user
// in the theme settings screen in the product configuration
const primaryColor = useOrganizationPrimaryColor();
return (
  <>
    <p data-tip="This is a very important message">Hover me</p>
    <ReactTooltip textColor={primaryColor} />
  </>
);

It looks and works great, so you call it a day and move to the next task. A few weeks later, the Product Owner asks you to bold the “very important” part in the tooltip message. react-tooltip seems to support the data-html attribute that allows you to use HTML in the tooltip content. As this is a static text you fully control, it’s reasonable to think it’s fully secure.

const primaryColor = useOrganizationPrimaryColor();
return (
  <>
    <p data-html data-tip="This is a <strong>very</strong> important message">
      Hover me
    </p>
    <ReactTooltip textColor={primaryColor} />
  </>
);

What you don’t realize is that there’s the following code in react-tooltip:

if (html) {
  const htmlContent = `${content}${
    style ? `\n<style aria-hidden="true">${style}</style>` : ''
  }`;
  return (
    <Wrapper
      id={this.props.id || uuid}
      ref={(ref) => (this.tooltipRef = ref)}
      dangerouslySetInnerHTML={{ __html: htmlContent }}
    />
  );
}

So content used in dangerouslySetInnerHTML is a concatenation of the tooltip content (which is fully expected) and tooltip styles, including the value of the textColor attribute. If the provided value of textColor is "red; </style> <img src=x onerror=alert('XSS')></img>"; we have an exploit.

This by itself is not that bad: in the worst case, the organization admin injects a custom script. Not a big deal. But malicious actors can take advantage of another vulnerability in the system to successfully run an attack. If the endpoint that saves the theme configuration fails to check permissions properly (Broken Access Control) and allows an unauthorized user to set the primary color for a theme, we have an exploitable vulnerability.

In the example above, the XSS and the Broken Access Control are minor problems on their own, but combined they allow attackers to execute scripts as other users. And while it might be unlikely to experience both issues simultaneously, in a large code base, with multiple teams working on it over a long time, such coincidences are inevitable. That's exactly why we need several different security mechanisms in place: if one fails, the others can still stop or limit the attack (that's the Defense in Depth concept).

You can see the full source code of the example above with an exploit here: dreamy-dream-1fwt60 – CodeSandbox.

SQL injection

For some reason, candidates always mention SQL injection as a type of vulnerability and then mostly fail to provide a specific example of vulnerable code. As ORMs are now industry-standard, we're no longer crafting database queries manually, so SQL injection is not a common problem. Still, it would be nice to understand the vulnerability if you mention it during an interview.

Let's assume you are working on the login functionality for your app, and in your backend code you write a query that checks whether the provided user credentials are valid:

const query = `SELECT * FROM users WHERE login = '${login}' AND password = '${password}'`;

As both login and password are provided by the user, they can contain anything. Specifically, the provided password can be ' OR ''='. In this case, the query executed by the database is:

SELECT * FROM users WHERE login = '' AND password = '' OR ''=''

The OR ''='' clause always evaluates to TRUE, allowing the attacker to bypass the sign-in mechanism.

You can find a working example here: prod-resonance-g3hd9f – CodeSandbox.
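The standard fix is parameterized queries: the values travel to the database separately from the SQL text, so they can never change the query's structure. A minimal sketch assuming node-postgres (`pg`); `buildLoginQuery` is a hypothetical helper:

```javascript
// Build a parameterized query object in the shape node-postgres accepts.
// $1/$2 are placeholders; the driver sends `values` out of band, so a
// password like "' OR ''='" stays plain data and cannot alter the SQL.
function buildLoginQuery(login, password) {
  return {
    text: "SELECT * FROM users WHERE login = $1 AND password = $2",
    values: [login, password],
  };
}

// Usage (assuming a configured pg Pool):
// const { rows } = await pool.query(buildLoginQuery(login, password));
```

Every mainstream driver and ORM offers an equivalent mechanism; the point is that user input never gets concatenated into the query string.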

Passwords should never be stored in the database in plain text. In a production application, you should store a salted hash of the password generated e.g., using the bcrypt algorithm (MD5 is considered insecure, and SHA is too fast, making brute-force attacks easier).

Prototype pollution

This class of vulnerabilities is caused by a deprecated JS feature: you can access an object's prototype with the object.__proto__ syntax. If you have an endpoint that merges user-provided data into some object and you are not careful enough, you might introduce prototype pollution. Let's take the following ExpressJS application as an example:

const express = require("express");
const lodash = require("lodash");

const app = express();
app.use(express.json()); // parse JSON request bodies into req.body

app.get("/", (req, res) => {
  // The line below uses Object.prototype.toString under the hood
  console.log("Request: " + req);
  res.send("Hello world");
});

app.post("/", (req, res) => {
  // We merge default data with user-provided data using lodash.merge
  const data = lodash.merge({ example: true }, req.body);
  res.send(data);
});

const listener = app.listen(8080, function () {
  console.log("Listening on port " + listener.address().port);
});

If the server runs on a vulnerable version of lodash, it’s enough to POST the following payload to effectively break the GET / endpoint:

{
	"__proto__": {
		"toString": "hacked!"
	}
}

You can see an example of a vulnerable server here: https://codesandbox.io/p/sandbox/ecstatic-lichterman-yhb4yi?file=%2Fapp.js.

While this works only because the server uses an old version of lodash, it turns out that prototype pollution in utility libraries is very common (see the GitHub Advisory Database). I recommend using only one of the most popular utility libraries, in its latest version, to minimize the risk of introducing this problem.
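If you ever have to merge untrusted JSON yourself, one mitigation is a defensive merge that simply refuses to walk prototype-polluting keys. A minimal sketch (patched library versions remain the primary fix):

```javascript
// Keys that would let user input reach Object.prototype.
const BLOCKED_KEYS = new Set(["__proto__", "constructor", "prototype"]);

// Recursively merge `source` into `target`, dropping dangerous keys.
function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (BLOCKED_KEYS.has(key)) continue; // drop the dangerous key entirely
    const value = source[key];
    if (value && typeof value === "object" && !Array.isArray(value)) {
      target[key] = safeMerge(target[key] || {}, value);
    } else {
      target[key] = value;
    }
  }
  return target;
}
```

Note that JSON.parse creates "__proto__" as an ordinary own property, which is exactly why a naive recursive merge later assigns through the prototype chain; skipping the key breaks that path.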

Other types of attacks

We have just scratched the surface of possible web application vulnerabilities. This is an endless topic, and there are dozens of vulnerability classes. I highly encourage you to bookmark GitHub – qazbnm456/awesome-web-security: 🐶 A curated list of Web Security materials and resources, and dedicate at least an hour weekly to going through the articles there.

What you can do to secure web applications

It’s very easy to introduce a security vulnerability; no single silver bullet would prevent that. Instead, we use a combination of processes and tools to reduce the risk of introducing a vulnerability and to minimize the impact once it’s there. It means you can (and should) take a lot of different steps to secure your web applications. Below, I’ll share some things my team does to secure our products.

A lot of the things below come from security requirements for Atlassian Marketplace applications: Security requirements for cloud applications.


  • use HTTPS with TLS 1.2 or newer (older versions are deprecated);
  • use battle-tested cryptographic libraries (no home-made ciphers and protocols);
  • set basic security headers. In ExpressJS, helmet can do this for you: GitHub – helmetjs/helmet: Help secure Express apps with various HTTP headers. As I don’t like cargo cults, I highly encourage you to understand what these headers do at some point in your career: OWASP Secure Headers Project | OWASP Foundation;
  • set cookies securely: Session Management – OWASP Cheat Sheet Series;
  • keep production environment logically separated from staging/dev (e.g., different account in AWS, different project in GCP);
  • always use two-factor authentication. Choose a YubiKey over a mobile authenticator app to make phishing attacks much more difficult. Also, see how Cloudflare prevented a phishing attack thanks to YubiKeys: The mechanics of a sophisticated phishing scam and how we stopped it;
  • follow the principle of least privilege: applications should request only the permissions required to deliver their functionality and nothing more. This applies to user and service accounts in AWS/GCP too;
  • keep all secrets in a secret manager. Never store them in the source code. If you move secrets from source code to the secret manager, remember to rotate them (it’s easy to read secrets from the old revision of the code);
  • always do a code review of new code. Correctness, security, and performance should be the focus of the code review.
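To make the cookie point from the list above concrete, here is what secure session-cookie attributes look like in ExpressJS. A sketch only; the cookie name and lifetime are example values:

```javascript
// Attributes for a session cookie set via ExpressJS's res.cookie.
const sessionCookieOptions = {
  httpOnly: true, // not readable via document.cookie, limiting XSS impact
  secure: true, // sent only over HTTPS
  sameSite: "lax", // withheld on most cross-site requests, mitigating CSRF
  maxAge: 60 * 60 * 1000, // expire after one hour (example value)
};

// Usage inside a request handler:
// res.cookie("session", sessionId, sessionCookieOptions);
```

Each attribute removes one class of cookie theft or misuse, which is the same Defense in Depth idea at a smaller scale.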


  • run static code analysis (e.g., SonarCloud) as part of the CI; while SonarCloud reports a fair amount of false positives, it still occasionally catches issues, so it’s worth it;
  • scan project dependencies for known vulnerabilities (Snyk/npm audit) — while these solutions can generate a lot of noise, you should strive to get rid of at least all critical and high vulnerabilities.

Content-Security-Policy (CSP)

This is a very powerful tool against XSS attacks. As an application developer, you can instruct the browser to load and execute only JavaScript files coming from whitelisted domains. Scripts from other domains, as well as inline scripts, will be blocked. This means that even if someone injects a malicious script (XSS), the browser will refuse to execute it.
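For illustration, a restrictive policy might look like the following sketch (the CDN host is a made-up example):

```javascript
// Build a Content-Security-Policy value: scripts only from our own
// origin and one whitelisted CDN; plugins and <base> hijacking blocked.
const csp = [
  "default-src 'self'",
  "script-src 'self' https://cdn.example.com",
  "object-src 'none'",
  "base-uri 'self'",
].join("; ");

// In ExpressJS you would send it on every response, e.g.:
// app.use((req, res, next) => {
//   res.setHeader("Content-Security-Policy", csp);
//   next();
// });
```

Because `script-src` omits 'unsafe-inline', inline scripts (the usual XSS payload) are refused by the browser regardless of how they got into the page.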

Still, CSP is tricky to get right. I personally loved GitHub’s blog posts on their CSP journey and highly encourage you to read them:


Bug bounty

If there’s one thing you should remember from this blog post, it’s Bugcrowd, a very popular bug bounty platform. Through it, we (Appfire) pay professional pen testers for each security vulnerability they find in our apps. And these folks are very creative. Even though we think about security at each stage of product development, they still find vulnerabilities in our products.

If you don’t do any external security testing, you have a false impression that your product is secure. I assure you it’s full of vulnerabilities; you just don’t know about them. Each report from Bugcrowd is a lesson in humility for us. Also, with each report, we learn and set the bar for the next one a little higher.


Securing web applications is an ongoing process. By following best practices, you can minimize risks but never completely eliminate them. Still, getting rid of the low-hanging fruit deters potential attackers. Attacking your application has to make sense financially, so by raising the bar you at least get rid of script kiddies.

But remember that security is often about subtle things. Even if you do everything “right”, there are attack vectors you simply did not think about. For example, one can guess a password based on code execution time, and there is a special method in the NodeJS standard library that compares strings in constant time to prevent this kind of attack.

There is always something else to learn about security. For me, security is one of the topics that shows candidates’ seniority during job interviews. If you feel that too, remember — we are hiring!