Data Center Equipment


  • Ashish Shorma Dipta

    Power System Engineer 🌎 |⚡Empowering Reliable Power Distribution

    39,144 followers

    ⚡ What really keeps a Tier III data center running 24/7—even during failures? It’s not just backup power. It’s how power flows through a fully redundant, concurrently maintainable design. Let’s break down Tier III data center power flow in a simple way 👇

    🔁 Tier III isn’t about zero failures — it’s about zero downtime during maintenance or single faults. Here’s how the power path makes that possible:

    🔌 1. Dual Utility / Source Paths
    ➡️ Independent A & B power paths
    ➡️ Either path can carry the full IT load
    ➡️ No single point of failure

    🔋 2. UPS with N+1 Redundancy
    ➡️ Continuous, clean power to the IT load
    ➡️ Batteries bridge the gap during utility loss
    ➡️ Maintenance possible without shutdown

    ⚙️ 3. Generator Backup
    ➡️ Starts automatically during prolonged outages
    ➡️ Supports the full load via either path
    ➡️ Fuel redundancy ensures extended runtime

    🧱 4. Switchgear & PDUs
    ➡️ Power routed through redundant switchboards
    ➡️ PDUs distribute power to racks independently
    ➡️ Faults are isolated without affecting IT equipment

    💻 5. IT Load (Dual-Corded Equipment)
    ➡️ Servers powered by both A & B paths
    ➡️ Loss of one path = no service interruption

    💡 Tier III data centers are designed for concurrent maintainability —
    👉 Any single component can be taken out of service without impacting operations.

    This is why Tier III remains the industry standard for enterprise and mission-critical facilities.

    🔎 If you work with data centers, power systems, or critical infrastructure, understanding this power flow is essential.

    ♻️ Repost to share with your network if you find this useful.
    🔗 Follow Ashish Shorma Dipta for more posts like this.

    #DataCenter #PowerDistribution #ElectricalEngineering #DataCenterDesign #PowerSystems #DataCenterOperations
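A rough way to see why the dual A/B paths matter: if each path is independent and each can carry the full IT load, the racks go dark only when both fail at once. A minimal sketch in Python — the per-path availability figure is assumed for illustration, not taken from the post:

```python
# Sketch: availability gain from two independent full-capacity power paths.
# The 0.999 per-path figure is an assumed illustration, not a measured value.

def dual_path_availability(a_path: float) -> float:
    """P(at least one of two independent paths is up)."""
    return 1 - (1 - a_path) ** 2

single = 0.999                      # assumed availability of one full path
dual = dual_path_availability(single)
print(f"single path: {single}, dual A/B paths: {dual:.6f}")
```

Three nines per path becomes six nines for the pair — which is the whole point of fully redundant A/B distribution.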

  • Simon Weiher (Influencer)

    Leading Transformation Projects @ Axpo | Team Builder, Business Developer, Strategist and Speaker for Energy & Mobility, ex-McKinsey

    6,726 followers

    EnergyEvening: Northern Norway - where the largest European data centers get built.

    Our journey through Northern Norway also took us past the construction site of Europe’s largest datacenter. The location: close to Narvik, in the middle of nowhere, but close to a high-voltage substation. Why here?

    First and foremost, there is plenty of power at attractive costs. NO4 (the name of the Northern Norway electricity zone) still exported up to 1 GW of power to Sweden and the rest of Norway during most of the last two weeks. And it should be noted that these were exceptional weeks, with temperatures typically averaging below -5°C. Now consider that plenty of households are heating their homes with electric resistance heaters and heat pumps, so these were high-consumption weeks. Power prices were thus higher than usual during this period but still below 100 EUR/MWh, even during peak hours.

    Secondly, the cold temperatures come with another side effect: data centers produce a lot of heat, and the arctic temperatures provide highly efficient natural cooling. This reduces power consumption and costs.

    It is hence not surprising that the developers of the Stargate Norway project have settled here. The datacenter will consume 230 MW of electric power in the first stage and could see a later extension to 520 MW, a size that outruns most - if not all - datacenters in the rest of Europe. Developed and built by Aker and Nscale, equipped with Nvidia technology, and serving OpenAI and Microsoft as initial customers, this datacenter has the most important players involved.

    I am really excited to see this growth opportunity for such a remote region. I wish we could create similar projects in Switzerland. If we manage to find real estate and build power infrastructure, this might even be possible. It just needs a bit of pioneering spirit.

    Read more about the project here: https://lnkd.in/e_7wtTeR

  • Kris McGee

    Advisor, Senior VP, eXp Commercial | Dirt Dawg | I Sell Land, Sometimes It Has Stuff On It | 32 Years Helping Visionary Investors See What Others Miss

    5,490 followers

    Everyone's chasing data center land. Almost everyone is missing the real constraint. It's not fiber. It's not even land. It's power.

    U.S. Interior Secretary Doug Burgum said at the Prologis conference: "To win the AI arms race against China, we've got to figure out how to build these artificial intelligence factories close to where the power is produced, and just skip the years of trying to get permitting for pipelines and transmission lines."

    Translation: The next generation of data centers won't be built where the land is cheap. They'll be built where the power is available.

    Three implications for dirt investors:

    1. Nuclear Proximity = New Premium: Amazon already signed deals with Dominion Energy near the North Anna nuclear power station in Virginia and expanded partnerships with Talen Energy at the Susquehanna nuclear plant. Sites within transmission distance of existing nuclear facilities just became exponentially more valuable.

    2. Warehouse Conversions Accelerate: If Prologis is eyeing their 6,000 buildings for data center conversion, every industrial site with surplus power capacity needs re-evaluation. What looks like a struggling warehouse today might be a data center tomorrow.

    3. Grid Capacity > Geographic Desirability: Constellation Energy CEO Joseph Dominguez noted that data economy customers "want to run their systems 24-7" with "firm pricing so that they know the price for energy for 20 years". Long-term power contracts are becoming the new land entitlements.

    But here's what nobody's talking about: the same power constraints driving this opportunity are also creating massive project risks. According to a recent CoStar analysis, data centers will account for up to 60% of total power load growth through 2030. But there's a timing mismatch: data centers take 2-3 years to build, while power system upgrades take 8 years. That gap is forcing developers to either wait or find sites with existing capacity.

    The Community Resistance Factor

    Data Center Watch estimates $64 billion in data center projects were blocked or delayed over a recent two-year period. There are now 142 activist groups across 24 states organizing against data center development. Northern Virginia alone, the nation's largest data center market, has 42 activist groups fighting projects. Reasons cited: water consumption, higher utility bills, noise, decreased property values, loss of open space.

    Translation for land investors: sites with existing power capacity plus community support just became exponentially more valuable than sites with just land and zoning. The power infrastructure thesis isn't just about finding available capacity. It's about finding that capacity in counties that actually want data centers. Not every market will roll out the welcome mat.

    Are you evaluating community sentiment alongside power infrastructure access?

  • Ronald Philip

    Real estate investment leadership | Ex McKinsey | Harvard & IIM alum | Logistics & industrial real estate | Data centers | Hospitality | Transport infrastructure | Strategy | M&A | Value creation | Middle East | Africa

    26,322 followers

    Land acquisition and securing utilities and permitting for data centers in Africa can be a harrowing experience. Somebody needs to write up a digest of all the crazy stories 😅

    I enjoyed sharing Agility Logistics Parks’ experiences in land acquisition and securing utilities and permitting for our pan-African logistics parks platform at ITW Africa in Nairobi - it’s given us invaluable experience and a land bank that we hope to leverage to help data center operators grow faster in Africa.

    At ITW Africa, I heard two data center operators talk about how they discovered the land plots they bought actually belonged to different owners.

    A CEO of an African data center operator shared his fascinating story. He wanted to build his data center close to the cable landing station and identified land in a village close to the CLS. The problem was that many villagers claimed to own the same land parcels, and there would be many sales / transactions of the same land plots. So when he eventually bought a land plot, he ensured the village chief and four other senior elders from the village were witnesses. The CEO went himself to the land registry to ensure it was registered properly. Imagine a data center CEO having to do that in any developed market.

    There was no road to the site, so he built a road to the site. There was no grid connection to the site, so he built his own independent power plant. It was a journey of over a decade. It was a first-mover advantage with such high barriers to entry. And sure enough, demand came from global hyperscalers and his business became a very attractive one.

    Good things do happen in Africa to those who are patient and persistent - and who can figure out land acquisition, utilities and permitting in a timely way 😅

    #africa #datacenters #datacentres #digitalinfrastructure

  • TOH Wee Khiang (Influencer)

    Director @ Energy Market Authority | Biofuels, Geothermal, Hydrogen, CCUS

    34,096 followers

    There are both energy and embodied carbon savings.

    "A prefabricated modular data centre is a data centre that has its systems (hardware and software) preassembled, integrated and tested in a factory environment. These systems may be mounted on a structure – called a skid – or installed within some kind of enclosure. Since they are built in controlled environments, prefabricated data centres have high quality and consistency. Our analysis shows prefabricated modules can also be deployed 40 per cent faster than a traditional build with the same infrastructure.

    On top of saving time, prefabrication saves on resources. There is little wastage of materials. Also, capacity can be added as needed rather than being built in right from the start. This approach is particularly beneficial for companies experiencing rapid growth or fluctuating workloads driven by artificial intelligence (AI) and edge computing.

    We have also found energy savings of 20 per cent from prefabricated modules, as the pre-engineered design of the modules allows for better integration of power and cooling system controls.

    Prefabricated data centre modules can also be used in existing buildings, making them suitable for anyone looking to repurpose an existing building for data centre use. As Singapore’s economy adapts to the AI movement, prefabrication technology is one way to upgrade existing space."

    https://lnkd.in/gKwuFTTi

  • Adam Bergman (Influencer)

    Technology & Sustainability Strategic Thought Leader with 25+ Years of Investment Banking Experience / LinkedIn Top Voice for Finance

    16,721 followers

    Increasing both the capacity and number of data centers is fundamental to the growth of AI, but they have become a lightning rod for criticism from local residents and politicians alike, as they are causing higher energy prices and using scarce water resources in a growing number of regions globally.

    On a recent episode of the Bloomberg Switched On podcast, Tom Rowlands-Rees and Lloyd Arnold, BloombergNEF's Global Power Analyst and Data Center Analyst, respectively, discussed “What Really Determines Where Data Centers Get Built”. The decision about where to site data centers is becoming more complex, with decision makers having to factor in energy & water availability and cost, as well as land permitting. However, other criteria are becoming more important, including taxes, fiber connectivity, and existing ecosystems, which are impacting competitiveness, given that tech companies remain focused on sustainability and net zero initiatives.

    Key takeaways from the podcast include:

    · Power constraints are now the biggest bottleneck - Many regions face grid congestion, long interconnection queues, and rising competition for electricity from AI, electrification, and industrial loads. Access to reliable, low‑carbon power is becoming a decisive factor in site selection.

    · Regional competitiveness is diverging - Markets with streamlined permitting, strong renewable‑energy pipelines, and supportive policy frameworks are pulling ahead. Others are struggling with regulatory complexity, land scarcity, or slow infrastructure build‑out.

    · Construction timelines are lengthening - Supply‑chain pressures, skilled‑labor shortages, and stricter environmental reviews are extending development cycles. Speed to market is becoming a differentiator — and a challenge.

    · Geopolitics and resilience matter more than ever - Operators are diversifying locations to reduce exposure to geopolitical risk, extreme weather, and single‑grid dependency. Redundancy is becoming a strategic asset.

    · Permitting and land availability remain major hurdles in dense metros, pushing operators toward secondary markets.

    · AI workloads are reshaping design, driving higher rack densities, new cooling strategies, and unprecedented energy demand forecasts.

    · Sustainability pressures are rising, with operators expected to prove real emissions reductions, not just offsets.

    Data center growth will continue, although some regions will be slower due to the challenges mentioned above. However, with so much capital being invested into the AI sector, we should expect that data center hyperscalers will be willing to overpay for the power and water needed to start the permitting and building process.

    Listen on Apple Podcasts: https://lnkd.in/gSw5GwKM

    #ai #datacenters #hyperscalers #renewableenergy EcoTech Capital Cy Obert Jeffrey Lipton

  • Sivanesan Kupusamy

    Senior Project Manager | Senior Commissioning Manager | MEP | Data Center | 275/132/33/11kv Substations & Data Hall Delivery | PMC | ASEAN & Global

    5,142 followers

    𝗥𝗲𝗱𝘂𝗻𝗱𝗮𝗻𝗰𝘆 𝗶𝗻 𝗗𝗮𝘁𝗮 𝗖𝗲𝗻𝘁𝗲𝗿:

    Redundancy is all about ensuring continuous availability of power and critical systems, even if one component fails. It is a key factor in achieving higher uptime tiers (Tier I–IV by the Uptime Institute). Here’s a breakdown of the types of redundancy and how utility, genset, and UPS work together to provide it:

    Types of Redundancy

    1. N (No Redundancy): Only one path for power supply. If it fails, downtime occurs. Used in small facilities or cost-sensitive setups.

    2. N+1 Redundancy: One extra (spare) unit for every group of “N” required. Example: if 4 UPS modules are needed, an extra one (a 5th) is installed. Allows maintenance or a single failure without an outage.

    3. 2N Redundancy (Fully Redundant): Two independent power paths (A and B), each capable of carrying the full load. Example: each rack has dual power feeds (A-side and B-side). If one entire path fails, the other takes over seamlessly.

    4. 2(N+1) Redundancy: Each independent path has its own N+1 configuration. Very high reliability, but also very costly. Used in hyperscale or Tier IV data centers.

    How Redundancy Works Across Power Sources

    1. Utility Supply: Primary source of power. Normally, two separate utility feeders may be provided (dual utility). In Tier III/IV facilities, each feeder connects to a separate switchgear bus (A & B). If one feeder is down, the other maintains supply.

    2. Generators (Gensets): Act as backup power when the utility fails. Redundancy is ensured by N+1 gensets (one extra engine than required) and parallel configurations (multiple gensets running together for load sharing). An Automatic Transfer Switch (ATS) or Static Transfer Switch (STS) ensures smooth changeover.

    3. Uninterruptible Power Supply (UPS): Provides instant power during switchover (bridges the gap until gensets start). Redundancy setups: N+1 UPS modules (modular UPS architecture), or 2N UPS systems with independent A & B feeds to IT racks. Battery autonomy is usually 5–15 minutes to cover genset start-up time.

    End-to-End Redundancy Flow

    1. Normal Mode: Utility → UPS → IT Load (with gensets on standby).

    2. Utility Failure: UPS instantly supplies the load from batteries → gensets auto-start → gensets stabilize → power is transferred to UPS → IT load continues unaffected.

    3. Redundancy Assurance: If one UPS or genset fails, others in the N+1 or 2N setup carry the load. Dual-cord servers get A-side and B-side feeds from independent paths.

    Image Source: https://lnkd.in/gwcvfxxr
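The N+1 example above (4 UPS modules needed, a 5th installed) can be sanity-checked with a small binomial model: the system stays up while at least N of the installed modules are up. The per-module availability below is an assumed figure for illustration:

```python
from math import comb

def at_least_k_up(n_installed: int, k_needed: int, a: float) -> float:
    """P(at least k of n independent modules are up), each available with prob a."""
    return sum(comb(n_installed, i) * a**i * (1 - a)**(n_installed - i)
               for i in range(k_needed, n_installed + 1))

a_module = 0.99   # assumed availability of one UPS module (illustrative)
print(f"N   (4 of 4 must be up): {at_least_k_up(4, 4, a_module):.4f}")
print(f"N+1 (4 of 5 must be up): {at_least_k_up(5, 4, a_module):.4f}")
```

With these assumed numbers, the single spare module lifts system availability from roughly 0.96 to roughly 0.999 — which is why N+1 is the baseline for maintainable UPS plants.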

  • Gedeon. Kitoko

    Electrical & Mechatronics Engineer | Certified Data Centre Professional (CDCP, CDCPT) | Expert in Power, HVAC & Critical Infrastructure

    1,313 followers

    Understanding Data Center Redundancy: The Backbone of Uptime

    In mission-critical environments, downtime is not an option. The true strength of a data center lies in how well it handles failure, and that’s where redundancy design becomes essential.

    🔹 N (Basic Capacity): Single path, no backup. All systems operate at full load.
    ➡️ Risk: Any failure = downtime

    🔹 N+1 (Resilient Design): One additional component (UPS, chiller, generator) for backup.
    ➡️ Advantage: Maintenance or failure without service interruption

    🔹 2N (Full Redundancy): Two completely independent systems running in parallel.
    ➡️ Advantage: No single point of failure — maximum reliability

    🔹 2N+1 (Ultimate Availability): Full duplication + additional spare capacity.
    ➡️ Advantage: Designed for Tier IV, zero-downtime environments.

    Redundancy is not just about adding equipment — it’s about eliminating single points of failure, ensuring fault tolerance, and maintaining continuous operation under all conditions. In real-world data centers, the challenge is balancing:

  • Pavel Purgat

    Innovation | Energy Transition | Electrification | Electric Energy Storage | Solar | LVDC

    27,331 followers

    🔌 The state-of-the-art power system in the data centre utilises 400 V AC connected to the MV grid via a low-frequency transformer (LFT) and distributed power factor correction (PFC) rectifiers at the rack level, achieving an overall efficiency of approximately 97.1% from MVAC input to the rack-level 400 V/48 V DC-DC conversion. Increasing the AC distribution voltage to 690 V may enhance the overall efficiency to about 97.8% due to reduced distribution losses, as losses in identical busbars decrease with the square of the voltage. PFC rectifiers suitable for 690 V AC can be employed with three-level topologies, maintaining high conversion efficiency. Alternatively, an 800 V DC (±400 V DC) distribution system can result in slightly lower distribution losses than the 690 V AC system. Additionally, there are other advantages to DC, such as the straightforward and efficient integration of battery energy storage systems.

    💡 In principle, three conceptual approaches to MVAC-LVDC conversion can be considered. The first involves retaining the LFT and centralising the PFC rectifier functionality with a high-power SiC unit. This approach achieves an MVAC-LVDC conversion efficiency of approximately 98.2% and an overall efficiency of around 97.9%, with an estimated power density of about 0.25 kW/dm³.

    The second option employs robust 12-pulse rectifier systems complemented by active filters (AFs) to achieve power factor correction, forming a hybrid transformer. This partial-power-processing technique enables a high MVAC-LVDC conversion efficiency of approximately 98.5% and an overall efficiency of about 98.2%, with a power density estimated at 0.22 kW/dm³.

    Finally, solid-state transformers (SSTs) with medium-frequency transformers (MFTs) represent a fully controllable option. Current MVAC-LVDC SST prototypes have demonstrated full-load efficiencies of around 98%, possibly reaching 98.5%, resulting in an overall efficiency of approximately 97.7% or 98.2%. However, the power density of the overall SST system based on modular topologies tends to be comparatively lower than that of the hybrid transformer solution, despite the very high power density of the modules.

    #solidstate #powerelectronics #datacenters #lowvoltage #directcurrent #efficiency #powerdensity
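The claim that busbar losses fall with the square of the voltage follows directly from I²R arithmetic: for a fixed power, current scales as 1/V. A quick sketch, where the 1 MW load and 1 mΩ busbar resistance are assumed illustrative figures (and a single-conductor simplification is used):

```python
# Sketch: busbar conduction loss vs distribution voltage (P_loss = I^2 * R).
# Load and resistance are assumed example figures, not from the post.

def busbar_loss_kw(load_kw: float, voltage_v: float, r_ohm: float) -> float:
    """Conduction loss in kW for a load drawn through a resistive busbar."""
    current_a = load_kw * 1e3 / voltage_v       # I = P / V
    return current_a**2 * r_ohm / 1e3           # I^2 * R, back to kW

load_kw, r_ohm = 1000.0, 0.001                  # 1 MW load, 1 mOhm busbar
loss_400 = busbar_loss_kw(load_kw, 400.0, r_ohm)
loss_690 = busbar_loss_kw(load_kw, 690.0, r_ohm)
print(f"400 V loss: {loss_400:.2f} kW, 690 V loss: {loss_690:.2f} kW")
print(f"ratio: {loss_400 / loss_690:.2f} = (690/400)^2 = {(690/400)**2:.2f}")
```

The loss ratio equals (690/400)² ≈ 2.98 regardless of the assumed resistance, which is the "square of the voltage" effect behind the roughly 0.7-point efficiency gain cited above.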

  • Francesc Queralt

    Data center development, operation & compliance | AI Act Compliance | Author of “Why data defense matters” | Law & MBA LBS | FRdP Scholar

    6,644 followers

    Data Center Rack Architecture

    A high-performance data center rack is now an integrated system where each layer influences efficiency, availability, and power density.

    - It begins with power distribution, utilizing high-reliability rack PDUs equipped with per-phase and per-outlet metering, remote monitoring capabilities, and full A/B redundancy to support critical loads. In advanced environments, busway systems enhance scalability and flexibility without invasive changes.

    - Next is structured cabling, which is designed to ensure low latency, maintain airflow integrity, and enable operational clarity. The physical separation between power and data is fundamental for reliability and maintainability.

    - The most critical layer is thermal management. In high-density settings, there is zero tolerance for hotspots. Racks must incorporate containment strategies, blanking panels, and rear-door heat exchangers, and be ready for liquid cooling integration, whether direct-to-chip or via a CDU.

    - Monitoring and control are also key components. Sensors for temperature, humidity, current, and leak detection in liquid environments provide real-time visibility and enable predictive operation. Without data, there is no efficiency.

    - Internal physical organization significantly impacts maintenance and scalability. Rail systems, spacing, airflow management, and accessibility directly affect intervention time and operational risk.

    In high-density environments, the rack serves as a critical engineering unit where power distribution, thermal behavior, and dynamic load variability must be precisely balanced. When specified correctly, the rack can sustain dynamic load shifts without degradation, maintain thermal stability at peak demand, and allow for expansion without structural rework. Way more complex than what we installed in our solar farms at SOCIAL ENERGY.

    This level of engineering is essential for achieving consistent energy efficiency, operational predictability, and high availability.

    #datacenter #rackarchitecture #highdensity #liquidcooling #powerdistribution #efficiency #missioncritical #infrastructure
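One practical consequence of the full A/B redundancy mentioned above: with dual-corded equipment, each rack PDU must be sized so that one feed can carry the entire rack alone if the other drops. A small sketch of that sizing check — the rack load, PDU capacity, and 80% derating threshold are all hypothetical example figures:

```python
# Sketch: verifying one feed of an A/B PDU pair can carry the whole rack.
# Rack load, PDU capacity, and the 0.8 derating limit are assumed figures.

def failover_utilisation(rack_kw: float, pdu_kw: float) -> float:
    """Fraction of one PDU's capacity used if it carries the rack alone."""
    return rack_kw / pdu_kw

rack_load_kw = 17.0      # assumed total rack load
pdu_capacity_kw = 22.0   # assumed capacity of each feed's PDU
u = failover_utilisation(rack_load_kw, pdu_capacity_kw)
print(f"single-feed utilisation on failover: {u:.0%}")
assert u < 0.8, "one feed could not safely carry the rack if the other fails"
```

In steady state each feed carries roughly half the load, so per-outlet metering that shows a feed above 40-50% in normal operation is an early warning that failover headroom is gone.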
