Google Workspace security whitepaper

How Google Workspace protects your data

Technology with security at its core

As an innovator in hardware, software, network, and system management technologies, Google applied the principle of “defense in depth” to create an IT infrastructure that is more secure and easier to manage than more traditional technologies. We custom-designed our servers, proprietary operating system, and geographically distributed data centers to ensure that Google Workspace runs on a technology platform that is conceived, designed, and built to operate securely.

State-of-the-art data centers

Security and data protection are among Google’s primary design criteria. Physical security at our data centers follows a layered model, including safeguards like custom-designed electronic access cards, alarms, vehicle access barriers, perimeter fencing, metal detectors, and biometrics, in addition to data center floors with laser beam intrusion detection.

Our data centers are monitored 24/7 by high-resolution interior and exterior cameras that can detect and track intruders, with access logs, activity records, and camera footage available in case an incident occurs. Data centers are also routinely patrolled by experienced security guards who have undergone rigorous background checks and training.

The closer you get to the data center floor, the tighter these security measures become. In fact, less than one percent of Google employees will ever set foot in one of our data centers. Those who do have specific roles, have been pre-approved, and access the floor in the only way possible: through a security corridor that implements multi-factor access control using security badges and biometrics.

Powering our data centers

To keep things running 24/7 and ensure uninterrupted services, Google’s data centers feature redundant power systems and environmental controls. Cooling systems maintain a constant operating temperature for servers and other hardware, reducing the risk of service outages. In case of an incident, every critical component has a primary power source and an equally powerful alternate. Our diesel engine backup generators can provide enough emergency electrical power to run each data center at full capacity. Fire detection and suppression equipment—including heat, fire, and smoke detectors—triggers audible and visible alarms in the affected zone, at security operations consoles, and at remote monitoring desks, helping to prevent hardware damage.

Environmental impact

Google cares deeply about minimizing the environmental impact of our data centers, to the point that we design and build our own facilities using the latest “green” technology. We install smart temperature controls, utilize “free-cooling” techniques like using outside air or reused water for cooling, and redesign how power is distributed to reduce unnecessary energy loss. We constantly gauge how we’re doing by calculating the performance of each facility using comprehensive efficiency measurements.

We’re proud to be the first major Internet services company to gain external certification of our high environmental, workplace safety, and energy management standards throughout our data centers. Specifically, we achieved voluntary ISO 14001, OHSAS 18001, and ISO 50001 certifications, which are all built around a very simple concept: Say what you’re going to do, then do what you say—and then keep improving.

Custom server hardware and software

Google’s data centers house energy-efficient, purpose-built servers and network equipment that we design and manufacture ourselves. Our production servers also run a custom-designed operating system (OS) based on a stripped-down and hardened version of Linux. In other words, Google’s servers and their OS are designed for the sole purpose of providing Google services, which means that, unlike much commercially available hardware, Google servers don’t include unnecessary components such as video cards, chipsets, or peripheral connectors that can introduce vulnerabilities. Google server resources are dynamically allocated, allowing for flexibility in growth and the ability to adapt quickly and efficiently, adding or reallocating resources based on customer demand.

This homogeneous environment is maintained by proprietary software that continually monitors systems for binary modifications. If a modification is found that differs from the standard Google image, the system is automatically returned to its official state. These automated, self-healing mechanisms enable Google to monitor and remediate destabilizing events, receive notifications about incidents, and slow down potential network compromises before they become critical issues.
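
To illustrate the idea of checking running systems against a known-good image, here is a minimal sketch, not Google’s actual tooling: it assumes a hypothetical manifest.json of approved SHA-256 digests and simply reports any binary that has drifted from that baseline so a remediation job could reinstall it from the official image.

    import hashlib
    import json
    from pathlib import Path

    # Hypothetical manifest mapping binary paths to the SHA-256 digests of the
    # approved "golden" image. In a real fleet this would be produced and signed
    # by a build system; here it is just a local JSON file.
    MANIFEST_PATH = Path("/etc/baseline/manifest.json")


    def sha256_of(path: Path) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()


    def find_drifted_binaries(manifest_path: Path = MANIFEST_PATH) -> list[Path]:
        """Compare installed binaries against the approved baseline.

        Any file whose current hash differs from the baseline, or which is
        missing entirely, is reported for re-imaging.
        """
        baseline = json.loads(manifest_path.read_text())
        drifted = []
        for name, expected_hash in baseline.items():
            path = Path(name)
            if not path.exists() or sha256_of(path) != expected_hash:
                drifted.append(path)
        return drifted


    if __name__ == "__main__":
        for path in find_drifted_binaries():
            print(f"drift detected: {path} -> schedule reimage from golden image")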

Hardware tracking and disposal

Google uses barcodes and asset tags to meticulously track the location and status of all equipment within our data centers from acquisition and installation, to retirement and destruction. We have also implemented metal detectors and video surveillance to help make sure no equipment leaves the data center floor without authorization. During its lifecycle in the data center, if a component fails to pass a performance test at any point, it is removed from inventory and retired.

Each data center adheres to a strict disposal policy and any variances are immediately addressed. When a hard drive is retired, authorized individuals verify that the disk is erased by writing zeros to the drive and performing a multiple-step verification to ensure it contains no data. If the drive cannot be erased for any reason, it is stored securely until it can be physically destroyed. Physical destruction is a multistage process that begins with a crusher that deforms the drive, followed by a shredder that breaks it into small pieces, which are then recycled at a secure facility.
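
As an illustration of the zero-fill and verification step only, and not Google’s actual disposal tooling, a sketch might look like the following. The device path is a placeholder, and running something like this for real requires the correct device node and appropriate privileges.

    import os

    CHUNK = 1 << 20  # overwrite and verify in 1 MiB chunks


    def zero_fill(device_path: str) -> int:
        """Overwrite an entire device (or image file) with zeros.

        WARNING: destructive. 'device_path' is a placeholder for illustration.
        Returns the number of bytes written.
        """
        zeros = bytes(CHUNK)
        written = 0
        with open(device_path, "r+b", buffering=0) as dev:
            size = dev.seek(0, os.SEEK_END)  # total capacity in bytes
            dev.seek(0)
            remaining = size
            while remaining > 0:
                n = dev.write(zeros[: min(CHUNK, remaining)])
                written += n
                remaining -= n
        return written


    def verify_zeroed(device_path: str) -> bool:
        """Independently re-read the device and confirm every byte is zero."""
        with open(device_path, "rb", buffering=0) as dev:
            while True:
                chunk = dev.read(CHUNK)
                if not chunk:
                    return True
                if any(chunk):  # any non-zero byte means the wipe failed
                    return False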

A global network with unique security benefits

Google’s IP data network consists of our own fiber, public fiber, and undersea cables, enabling us to deliver highly available, low-latency services across the globe.

With other cloud services and on-premises solutions, customer data must make several journeys between devices, known as “hops,” across the public Internet. The number of hops depends on the distance between the customer’s ISP and the solution’s data center, and each additional hop introduces a new opportunity for data to be attacked or intercepted. Because it’s linked to most ISPs in the world, Google’s global network can limit the number of hops across the public Internet, improving the security of data in transit.

Defense in depth describes the multiple layers of defense that protect Google’s network from external attacks. It starts with industry-standard firewalls and access control lists (ACLs) that enforce network segregation, while all traffic is routed through custom Google Front End (GFE) servers that detect and stop malicious requests and Distributed Denial of Service (DDoS) attacks. Additionally, GFE servers are only allowed to communicate with a controlled list of internal servers, a “default deny” configuration that prevents them from accessing unintended resources. Finally, logs are routinely examined to reveal any exploitation of programming errors, and access to networked devices is restricted to authorized personnel. The bottom line? Only authorized services and protocols that meet our security requirements are allowed to traverse our network; anything else is automatically dropped.
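
The “default deny” idea can be illustrated with a short sketch. The backend names and the ConnectionRequest type below are hypothetical, and real front-end policy is far richer than a static allowlist; the point is simply that anything not explicitly permitted is refused.

    from dataclasses import dataclass

    # Hypothetical allowlist: the only internal backends a front-end server may
    # reach, keyed by (host, port, protocol). Anything not listed is denied.
    ALLOWED_BACKENDS = {
        ("mail-backend.internal", 443, "https"),
        ("docs-backend.internal", 443, "https"),
    }


    @dataclass(frozen=True)
    class ConnectionRequest:
        host: str
        port: int
        protocol: str


    def is_allowed(req: ConnectionRequest) -> bool:
        """Default deny: a request passes only if it matches an explicit entry."""
        return (req.host, req.port, req.protocol) in ALLOWED_BACKENDS


    # An expected destination is forwarded; an unexpected one is dropped.
    assert is_allowed(ConnectionRequest("mail-backend.internal", 443, "https"))
    assert not is_allowed(ConnectionRequest("random-host.internal", 22, "ssh"))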

Encrypting data in transit and at rest

Encryption is an important piece of the Google Workspace security strategy, helping to protect your emails, chats, video meetings, files, and other data. First, we encrypt certain data as described below while it is stored “at rest”—stored on a disk (including solid-state drives) or backup media. Even if an attacker or someone with physical access obtains the storage equipment containing your data, they won’t be able to read it because they don’t have the necessary encryption keys. Second, we encrypt all customer data while it is “in transit”—traveling over the Internet and across the Google network between data centers. Should an attacker intercept such transmissions, they will only be able to capture encrypted data. We’ll take a detailed look at how we encrypt data stored at rest and data in transit below.
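
To make the at-rest principle concrete, here is a minimal sketch using the open-source cryptography package’s Fernet recipe. It is purely illustrative: Google manages encryption keys with its own internal key management services rather than a locally generated key, and the plaintext shown is a made-up stand-in for customer data.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Illustrative symmetric key; in practice keys are generated, stored, and
    # rotated by a key management service, never kept alongside the data.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    plaintext = b"customer document contents"
    stored_blob = cipher.encrypt(plaintext)   # what would sit on disk or backup media
    assert stored_blob != plaintext           # ciphertext is unreadable without the key

    recovered = cipher.decrypt(stored_blob)   # only possible with the key
    assert recovered == plaintext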

Google has led the industry in using Transport Layer Security (TLS) for email routing, which allows Google and non-Google servers to communicate in an encrypted manner. When you send email from Google to a non-Google server that supports TLS, the traffic is encrypted, preventing passive eavesdropping. We believe increased adoption of TLS is so important for the industry that we report TLS progress in our Email Encryption Transparency Report. We have also improved email security in transit by developing and supporting the MTA-STS standard, which allows receiving domains to require transport confidentiality and integrity protection for email. In addition, Google Workspace customers can require that email sent to specific domains and email addresses be transmitted only when those destinations are covered by TLS. This can be managed through the TLS compliance setting.
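
The behavior of a TLS-required policy can be sketched with Python’s standard smtplib. The host and message are placeholders, and a real mail transfer agent would resolve MX records, authenticate where required, and honor the recipient domain’s published MTA-STS policy; the sketch only shows refusing to fall back to cleartext.

    import smtplib
    import ssl
    from email.message import EmailMessage


    def send_with_required_tls(host: str, msg: EmailMessage) -> None:
        """Deliver a message only if the receiving server offers STARTTLS."""
        context = ssl.create_default_context()  # verifies certificates and hostnames
        with smtplib.SMTP(host, 587, timeout=30) as server:
            server.ehlo()
            if not server.has_extn("starttls"):
                raise RuntimeError("TLS required but not offered; refusing cleartext delivery")
            server.starttls(context=context)
            server.ehlo()  # re-identify over the now-encrypted channel
            server.send_message(msg)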

For further information on encryption, please see our Google Workspace Encryption whitepaper.

Low latency and highly available solution

Google designs all the components of our platform to be highly redundant, from our server design and how we store data, to network and Internet connectivity, and even the software services themselves. This “redundancy of everything” includes error handling by design and creates a solution that is not dependent on a single server, data center, or network connection.

Google’s data centers are geographically distributed to minimize the effects of regional disruptions such as natural disasters and local outages. In the event of hardware, software, or network failure, data is automatically shifted from one facility to another so that, in most cases, Google Workspace customers can continue working without interruption. This also means customers with global workforces can collaborate on documents, hold video meetings, and more without additional configuration or expense, sharing a highly performant, low-latency experience as they work together on a single global network.

Google’s highly redundant infrastructure also helps protect our customers from data loss. For Google Workspace, our recovery point objective (RPO) target is zero, and our recovery time objective (RTO) design target is also zero. We aim to achieve these targets through live, synchronous replication: actions you take in Google Workspace products are replicated in two data centers at once, so that if one data center fails, we transfer your data to the other one, which has been reflecting your actions all along.

To do this efficiently and securely, customer data is divided into digital pieces with random file names. Neither the content nor the file names of these pieces are stored in readily human-readable format, and stored customer data cannot be traced to a particular customer or application just by inspecting it in storage. Each piece is then replicated in near-real time over multiple disks, multiple servers, and multiple data centers to avoid a single point of failure. To further prepare for the worst, we conduct disaster recovery drills that assume individual data centers—including our corporate headquarters—won’t be available for 30 days.
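
A simplified sketch of this chunking-and-replication pattern follows; it is not Google’s storage system. Local directories stand in for independent disks, servers, and data centers, the chunk size is arbitrary, and the returned index is a hypothetical stand-in for the metadata a storage service would keep to reassemble an object.

    import os
    import secrets

    CHUNK_SIZE = 4 * 1024 * 1024  # illustrative 4 MiB pieces


    def shard_and_replicate(data: bytes, replica_dirs: list[str]) -> dict[int, str]:
        """Split 'data' into chunks with random names and copy each chunk to
        every replica directory. Returns a map from chunk order to chunk name."""
        index = {}
        for offset in range(0, len(data), CHUNK_SIZE):
            name = secrets.token_hex(16)  # random, non-descriptive file name
            index[offset // CHUNK_SIZE] = name
            for root in replica_dirs:
                os.makedirs(root, exist_ok=True)
                with open(os.path.join(root, name), "wb") as fh:
                    fh.write(data[offset:offset + CHUNK_SIZE])
        return index


    def reassemble(index: dict[int, str], replica_dir: str) -> bytes:
        """Rebuild the object from any single surviving replica."""
        parts = []
        for position in sorted(index):
            with open(os.path.join(replica_dir, index[position]), "rb") as fh:
                parts.append(fh.read())
        return b"".join(parts)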

Service availability

Some Google services may be unavailable in certain jurisdictions, either currently or temporarily. Google’s Transparency Report shows recent and ongoing disruptions of traffic to Google products. Our monitoring tools allow us to observe worldwide traffic patterns over time and detect significant changes. We also consult these traffic graphs when we receive inquiries from journalists, activists, or other people on the ground. We provide this data to help the public analyze and understand the availability of online information.