Scheduling in Kubernetes happens in various ways. Depending on the workload, you might need different algorithms like 𝗚𝗮𝗻𝗴 𝗦𝗰𝗵𝗲𝗱𝘂𝗹𝗶𝗻𝗴. Volcano, a CNCF project, supports this and can optimize complex workflows such as AI training, inference pipelines, and distributed data processing.

🚀 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗚𝗮𝗻𝗴 𝗦𝗰𝗵𝗲𝗱𝘂𝗹𝗶𝗻𝗴?
Gang scheduling ensures all pods in a group ("gang") start simultaneously or none do. This prevents partial execution, which is critical for interdependent tasks like distributed training or multi-stage AI pipelines. Without it, a single delayed pod could stall an entire workflow, wasting resources.

𝗘𝘅𝗮𝗺𝗽𝗹𝗲: In distributed AI training, if three worker pods are needed, Volcano’s gang scheduler waits until all 3 are available. If even one fails to schedule, the scheduler releases reserved resources to avoid cluster deadlocks.

⚡ 𝗪𝗵𝘆 𝗩𝗼𝗹𝗰𝗮𝗻𝗼?
Volcano extends Kubernetes’ default scheduler to handle batch workloads and multi-pod dependencies. It’s ideal for:
→ AI/ML workflows (e.g., TensorFlow/PyTorch jobs).
→ Big Data processing (Spark, Flink).
→ High-performance computing (HPC).

Key features:
✅ PodGroup orchestration: Treats multiple pods as a single schedulable unit.
✅ Fair-share resource allocation: Balances cluster resources across teams.
✅ Preemption/Reclaim: Prioritizes critical workloads without manual intervention.

🌟 𝗥𝗲𝗮𝗹-𝗪𝗼𝗿𝗹𝗱 𝗨𝘀𝗲 𝗖𝗮𝘀𝗲
Imagine training a large language model (LLM) across 3 GPUs. With gang scheduling:
→ Volcano groups all worker pods into a PodGroup.
→ The scheduler reserves resources only when all 3 GPUs are available.
→ If a node fails, Volcano retries or releases resources instantly, avoiding idle clusters.

This eliminates "resource hoarding" and ensures cost-efficient scaling for AI teams.

#Kubernetes #mlops
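To make the PodGroup idea concrete, here is a minimal sketch of submitting a gang-scheduled Volcano Job from Python with the official kubernetes client. It assumes Volcano is installed in the cluster and that its Job CRD is served at batch.volcano.sh/v1alpha1 (field names can vary across Volcano releases); the job name, container image, and GPU request are illustrative placeholders, not values from the post.

```python
# Minimal sketch: gang-scheduled Volcano Job via the Kubernetes custom-objects API.
# minAvailable=3 is the "gang": either all three workers are scheduled, or none are.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

training_job = {
    "apiVersion": "batch.volcano.sh/v1alpha1",   # assumed CRD group/version
    "kind": "Job",
    "metadata": {"name": "llm-training", "namespace": "default"},  # hypothetical name
    "spec": {
        "schedulerName": "volcano",   # hand pods to Volcano instead of the default scheduler
        "minAvailable": 3,            # gang size: schedule all 3 workers or release resources
        "tasks": [
            {
                "name": "worker",
                "replicas": 3,
                "template": {
                    "spec": {
                        "restartPolicy": "Never",
                        "containers": [
                            {
                                "name": "trainer",
                                "image": "pytorch/pytorch:latest",  # placeholder image
                                "resources": {"limits": {"nvidia.com/gpu": "1"}},
                            }
                        ],
                    }
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="batch.volcano.sh",
    version="v1alpha1",
    namespace="default",
    plural="jobs",
    body=training_job,
)
```

Volcano creates the PodGroup behind this Job automatically; until three workers can be placed at once, none are started, which is the deadlock-avoidance behaviour described above.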
Advanced Planning And Scheduling Systems
Explore top LinkedIn content from expert professionals.
-
Org charts may not sound like a hot topic. But, partly due to AI, we’re starting to see the strongest challenge to the traditional org chart since it was invented in 1854. That’s great news: the nature of management is fundamentally shifting.

Historically, org charts were about lines of control. That was essential in 1854, when that org chart existed to manage the steam-powered New York & Erie Railroad. It kept trains running on time and prevented collisions. Telegraph systems to communicate orders often could send messages only one-way. HQ commanded; others obeyed.

But our world is far more networked, fast-changing, and complex. While our communications systems have evolved for our current world (and helped to create it), org charts haven’t changed much. There are attempts to acknowledge the new environment with matrixed organizations and dotted lines, but they struggle – because they’re premised on an org approach that’s fundamentally unsuited to our time.

What’s the alternative? In tech, cross-functional teams work together to address key issues. At Meta, for example, teams of engineers, marketers, researchers, and others combine to handle products like ad technology systems. They’re focused on the platform, not their reporting lines. People have specializations, to be sure, but they combine to focus collectively on issues.

Capital One, a large US financial institution, organizes in a similar way. It has cross-functional teams that own customer journeys.

Procter & Gamble is embracing “outcome-oriented organizations,” again based on cross-functional teams. A team for a new product, for instance, might involve specialists in many sub-disciplines. Critically, they use AI as a “cybernetic teammate” to help fill gaps where specialists may not be available. This means they don’t need to wait for people to come free in order to get moving. The result is a much faster and more flexible way of working.

Microsoft advocates for creating “work charts” rather than org charts. They focus on breaking work into jobs to be done (a new use for the term and methodology), then figuring out what combination of disciplines can get those jobs done best.

These approaches are more atomized by purpose than traditional org methods, but because they’re cross-functional they’re also networked from the get-go. You don’t have to wait for some massive corporate reorg to make this happen. The approaches apply within functions and small teams as well. They can be an informal way of working.

It’s crazy that we’re managing today based on approaches from 1854. Better ways are now at hand.
-
If you talk to enough GTM operators and the RevOps leaders supporting them, you’ll hear the same frustration: “We fix everything upstream, and scheduling still finds a way to break.”

A rep grabs the wrong calendar. A handoff gets messy. Enrichment lags. Ownership rules get ignored. And a qualified prospect sits in limbo or disappears entirely. Everyone feels the pain, yet nobody truly owns the fix.

We solved routing. We solved scoring. We solved attribution. But scheduling (the moment with revenue on the line) stayed detached from the system designed to govern it. It looks tiny from the outside, but scheduling carries the load of the whole GTM engine. It’s where logic, data, timing, and fairness collide. Most tools don’t understand any of that. They treat booking a meeting as a click, not a system event.

That gap is why I’ve been paying attention to what Default is launching today. Their new Chrome extension brings orchestration logic directly into Gmail, Salesforce, and the places reps live every day. Before a rep even sees the calendar, Default is already evaluating:
— Multi-object routing
— Enrichment waterfalls
— Account hierarchies
— Qualification rules
— Fairness and load balancing
— Booker attribution
— SLAs and follow-up workflows

Only then does it show time slots. The extension becomes a distributed front-end for RevOps: your logic follows the rep, not the other way around.
➡ Handoffs stay intact.
➡ Ownership stays accurate.
➡ Meeting workflows fire cleanly.
➡ Debugging becomes observable rather than guesswork.

The meeting reflects the system, not rep improvisation. For operators, this moves us closer to something we’ve been chasing for years: a GTM engine that behaves the way it was actually designed.

Who else is excited?

#RevOps #MarketingOps #Scheduling #LeadRouting #DefaultPartner #GTM
-
Meetings cut in half. Escalations down 75%. No new tools required.

A cross-functional marketing team at a major global retailer was drowning: only 22% thought their meetings were a good use of time, and just 39% understood the metrics they were being evaluated against. No calendar audit fixed it.

What did? Getting their team working norms aligned, starting with cross-functional goals. With help from Sacha Connor at Virtual Work Insider, the team worked through five intensive 90-minute sessions over two months. Three focus areas made the difference:

🔹 Align goals before anything else. They mapped KPIs side by side and found one function's top priority barely registered for the other. They worked to get aligned, and shared understanding of team metrics went from 39% to 83%.

🔹 Clarify decision rights first. Designated points of contact absorbed a brutal 15:1 staffing ratio, without adding headcount. It also cut down on meetings ("where are we on X") and reduced escalations by 75%!

🔹 Create norms for communication. One rule on Teams: drop an eyeball emoji to acknowledge you've seen a message. Information-flow effectiveness jumped from 41% to 83%.

As Sacha put it about Team Working Agreements: most companies put a toolkit on the intranet, maybe a couple teams download it, work through the logistics and call it done. It's not. Three-quarters of teams have never established formal norms. If you're about to layer AI on top of that foundation, you're building on sand.

👉 Full case study in today's newsletter, linked in comments

What's actually standing in the way of your team doing this work?

#Meetings #Management #AI
-
Spain’s 15-Minute Rule: Stability by Throttling?

Five months after Spain’s April blackout, voltage surges still haunt the grid. The lights are steady, the dynamics aren’t.

What Changed:
➤ Red Eléctrica now requires PV and wind plants ≥5 MW (connected since 2018) to slew dispatch set-points linearly over 15 min, each ramping at ≤10 %/min.
➤ Previously, the dispatch gate was 2 min, not a physical ramp, but the telemetry cycle for updating set-points in CECRE.
➤ Inverter limits are unchanged, and legacy fleets remain exempt unless they opt in to the new cycle.

Why? “To reduce sudden voltage fluctuations.”
• Technically, quarter-hour dispatch updates smooth the reactive step seen by the TSO, giving OLTCs and shunt controls ~900 s of breathing room.
• Economically, it’s a mixed bag: aFRR and FCR services (≤10 %/min) are unaffected, but manual balancing and schedule deviations lose speed, trimming adjustment-market rent.

The Context:
Spain’s grid still shows overvoltage oscillations, even in “enhanced operating mode.” During the April 28 blackout, voltages swung ±7.5 kV each side (≈ 15 kV peak-to-peak on the 400 kV bus), and after the Granada 400/220 kV transformer trip, levels hit ~1.10 pu (≈ 440 kV, +10 %), enough to trigger multiple protections. Operators re-meshed the network and disconnected reactors to damp a 0.63 Hz oscillation. The fix lowered impedance but also removed VAR sinks, heightening voltage sensitivity. Five months later, the same fragility remains. REE’s answer: slow the grid’s reflexes.

The Mechanism:
Limiting dispatch ramps doesn’t fix voltage instability, it only slows reactive mismatch. It buys time, not stability. Like lowering the gain on a feedback loop instead of redesigning the amplifier, the grid becomes slower, safer, but less flexible. In the short term, it eases stress on reactive controls; in the long term, it risks dulling renewables’ role as balancing assets.

The Trade-off:
➤ The new gate helps the operator manage voltage, but at the expense of agility for inverter-based plants.
➤ Balancing bids can still be placed every 15 min; the plant simply ramps to the new target at ≤10 %/min.
➤ The “loss” isn’t a ban but an opportunity cost: reduced responsiveness in a market that values speed.

It’s operationally understandable, but it highlights a deeper issue: a grid still short of dynamic VAR absorption, the kind provided by synchronous condensers or advanced grid-forming inverters, and real-time coordination between grid-forming and grid-following assets.

The Real Question:
👉 Voltage control by dispatch gating isn’t resilience, it’s a timeout. Until grids gain real-time reactive visibility, adaptive damping, and coordinated voltage control, we’ll keep slowing renewables instead of strengthening systems.
👉 The hardest part of the transition isn’t generation, it’s control.

#SpainBlackout #VoltageStability #GridResilience #IBR #GridForming #Renewables #RideThrough #synchronousCondenser
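As a quick sanity check on the figures quoted above, here is a small illustrative Python calculation. The 400 kV base, the 1.10 pu reading, and the ≤10 %/min ramp limit come straight from the post; the example set-point step sizes are invented purely for illustration, and the exact regulatory definitions live in REE's operating procedure, not here.

```python
# Back-of-the-envelope check of the numbers in the post (illustrative only).
V_NOM_KV = 400.0          # nominal 400 kV bus
v_post_trip_pu = 1.10     # ~1.10 pu reported after the Granada transformer trip

print(f"1.10 pu on a {V_NOM_KV:.0f} kV base = {v_post_trip_pu * V_NOM_KV:.0f} kV (+10%)")

# New dispatch gate: set-point changes are slewed over a 15-minute window,
# with each plant ramping at no more than 10% of capacity per minute.
RAMP_LIMIT_PCT_PER_MIN = 10.0
for step_pct in (20, 50, 100):   # hypothetical set-point changes, in % of capacity
    minutes = step_pct / RAMP_LIMIT_PCT_PER_MIN
    print(f"A {step_pct:3d}% set-point change takes at least {minutes:4.1f} min "
          f"(fits the 15-min window: {minutes <= 15.0})")
```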
-
Early morning (my time...) long form musings.

💵 Revenue stalls when teams stay in their lanes. Collaboration is the only way to win.

Too many companies treat Product Marketing like a content factory, looking to them to crank out decks, one-pagers, and flashy campaigns - without asking a very important question -> Will this actually help sales convert?

We don't want to be talking about features, or just running campaigns randomly to create buzz. Or - leave our Sales team to battle buyer objections without support. We all win when these groups (and others) collaborate early and often, making sure we align around *outcomes* buyers are interested in. That means creating messaging that isn't about us; it's about our prospects and talks to their pain, their needs, their goals.

We can't afford to leave Sales guessing how to translate features into business outcomes. We don't want our partners in Product Management frustrated because their vision gets watered down. We gotta talk, people!

If we collaborate, sales enablement becomes a growth engine, not an afterthought, and conversion rates increase instead of pipelines stalling. Cross-functional engagement isn’t just “nice to have.” It’s how we help our companies turn messaging into revenue.

That means:
→ Building messaging that connects outcomes to buyer pain, not specs to features.
→ Partnering with sales before a launch to arm them with tools, stories, and training that shorten the sales cycle instead of slowing it.
→ Making enablement a culture, not an afterthought.

If Product Marketing is doing its job, sellers don't just get collateral. They get clarity, confidence, and conversations that convert.

Alignment isn't optional. It's revenue.

#productmarketing #outcomefocus #salesenablement
-
𝗛𝗼𝘄 𝘁𝗼 𝗕𝗿𝗲𝗮𝗸 𝗗𝗼𝘄𝗻 𝗦𝗶𝗹𝗼𝘀 𝗶𝗻 𝗠𝗲𝗱𝗧𝗲𝗰𝗵 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁:
(𝗖𝗿𝗲𝗮𝘁𝗶𝗻𝗴 𝗰𝗿𝗼𝘀𝘀-𝗳𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝗮𝗹 𝗵𝗮𝗿𝗺𝗼𝗻𝘆 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝘁𝗵𝗲 𝗵𝗲𝗮𝗱𝗮𝗰𝗵𝗲𝘀)

Ever notice how Quality, R&D, Regulatory and Marketing teams seem to speak completely different languages? This disconnect isn't just frustrating, it's costing your medical device company time, money, and potentially regulatory approval.

In my personal experience, I've seen how departmental friction can derail even the most promising innovations.

𝗧𝗵𝗲 𝗥𝗲𝗮𝗹 𝗖𝗼𝘀𝘁 𝗼𝗳 𝗦𝗶𝗹𝗼𝘀
👉 Delayed submissions and market entry
👉 Regulatory surprises late in development
👉 Documentation rework and compliance gaps
👉 Increased development costs
👉 Team frustration and burnout

Here's how to create seamless collaboration across your MedTech organization:

𝗦𝘁𝗲𝗽 𝟭: 𝗘𝘀𝘁𝗮𝗯𝗹𝗶𝘀𝗵 𝗖𝗿𝗼𝘀𝘀-𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝗮𝗹 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲
Create a development council with representatives from Quality, Regulatory, R&D, Manufacturing, Marketing and Clinical. Meet bi-weekly with a structured agenda (top tip: keep the minutes to use towards management reviews).

𝗘𝘅𝗮𝗺𝗽𝗹𝗲: A Class II device manufacturer implemented this model and reduced their development timeline by 30%, if not more, by identifying regulatory concerns during the concept phase rather than at pre-submission.

𝗦𝘁𝗲𝗽 𝟮: 𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁 𝗦𝘁𝗮𝗴𝗲-𝗚𝗮𝘁𝗲 𝗥𝗲𝘃𝗶𝗲𝘄𝘀 𝘄𝗶𝘁𝗵 𝗔𝗹𝗹 𝗦𝘁𝗮𝗸𝗲𝗵𝗼𝗹𝗱𝗲𝗿𝘀
Don't move to the next development phase without formal sign-off from every department. This prevents costly backtracking.

𝗘𝘅𝗮𝗺𝗽𝗹𝗲: During a stage-gate review (Design Review), a clinical specialist identified that the intended claims presented by the regulatory team would require further clinical data. By catching this early, the company adjusted their development plan rather than facing a surprise 6-month+ delay come submission time.

𝗦𝘁𝗲𝗽 𝟯: 𝗖𝗿𝗲𝗮𝘁𝗲 𝗮 𝗦𝗵𝗮𝗿𝗲𝗱 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲
Develop a glossary of terms that bridges departmental jargon. This prevents miscommunication that leads to rework.

𝗘𝘅𝗮𝗺𝗽𝗹𝗲: One client I worked with created a “MedTech Translation Guide” with input from each department. Not only did it reduce confusion, but it also built mutual respect: engineers finally understood what the regulatory team meant by “intended use,” and marketers stopped using terms that could trigger a knock on the door from Competent Authorities.

𝗧𝗵𝗲 𝗕𝗼𝘁𝘁𝗼𝗺 𝗟𝗶𝗻𝗲?
When this is done right, it accelerates development, strengthens compliance, and builds a more engaged team:
✅ Faster to market
✅ Fewer compliance surprises
✅ Less internal friction

If you're building your next-gen device and struggling with internal disconnects, it’s time to rethink how your teams work 𝘵𝘰𝘨𝘦𝘵𝘩𝘦𝘳.

💬 I'd love to hear: How does your team keep cross-functional collaboration on track?

#MedTech #MedicalDevice #ProductDevelopment
-
You're sitting in an L5-level system design interview at Google, and you've just been told to design a distributed job scheduler. You’ve done job schedulers before. Great. But it only takes one extra constraint to turn something “simple” into a headache:
→ Suppose they add DAG-based execution and now you’re managing dependency ordering
→ Suppose they add millions of jobs/day and suddenly your scheduler table must survive hell
→ Suppose they add multi-level executors (cheap vs expensive hardware) and now you’re in OS-level scheduling territory

Before you know it, your “simple scheduler” becomes a mini Airflow + Cron + Kafka hybrid.

Here’s my personal checklist of 15 things you must get right when designing a distributed job scheduler (a sketch of the core polling loop follows the list):

1. Store binaries in object storage
Never ship code through your backend. Users upload binaries/scripts → you store them in S3/GCS → executors download directly.

2. Separate Cron jobs and DAG jobs
Cron needs predictable time-based triggering. DAGs need dependency resolution + epoch tracking. Do NOT mix both in one table.

3. Topologically sort DAGs on upload
Users will dump random graphs. You must determine roots, order, and execution sequence.

4. Pre-schedule only the next Cron run
Not all future runs. Only the *upcoming* job instance goes into the scheduler table.

5. Each job must have a “run_at” timestamp
Schedulers poll: `SELECT * FROM tasks WHERE run_at <= NOW() AND status = 'pending'`

6. Update run_at as soon as execution starts
Add +5 or +10 min. This prevents retry storms and ensures clean scheduling timeouts.

7. Executors pull, not receive pushed tasks
Pulling avoids overload, simplifies horizontal scaling, and prevents blind pushes.

8. Use an in-memory message broker for load balancing
Kafka = bad for job schedulers (partition lock-in). ActiveMQ/RabbitMQ = executors pick tasks only when idle.

9. Use multi-level priority queues
Think OS scheduling:
Level 1 → cheap nodes
Level 2 → standard
Level 3 → high-power nodes
Long-running tasks get escalated.

10. Use distributed locks for “run once” semantics
Zookeeper lock per job ID → prevents simultaneous execution on multiple executors.

11. Accept that some jobs may run twice
Make jobs idempotent. Use versioned writes. Retry logic will inevitably double-fire something.

12. Maintain a status table with final outcomes
Users should see: pending, running, success, failed, error logs.

13. Use read replicas for user-facing status
Never let users hit the primary scheduler DB.

14. Shard scheduler table by job_id + time range
Millions of rows. High churn. Without sharding, your entire system becomes a single-point bottleneck.

15. Use change-data-capture (CDC) instead of 2-phase commits
When DAG nodes complete → update DAG table → emit CDC event → enqueue next node. No locking hell. No cross-table multi-row transactions.
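Here is the polling-loop sketch referenced above, covering points 5-7: executors pull due tasks and bump run_at the moment they claim one, so a crashed executor's work becomes visible again after the timeout. It uses SQLite purely so the snippet runs standalone; the table schema, the 5-minute visibility timeout, and the job id are illustrative assumptions, not a reference implementation, and a real system would use a sharded SQL store plus a distributed lock (points 10 and 14).

```python
# Minimal sketch of the poll-and-claim loop from points 5-7 (illustrative schema).
import sqlite3
import time

VISIBILITY_TIMEOUT_S = 300  # point 6: push run_at forward ~5 min once a task is claimed

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    job_id TEXT PRIMARY KEY,
    run_at REAL NOT NULL,   -- epoch seconds
    status TEXT NOT NULL    -- pending | running | success | failed
)""")
db.execute("INSERT INTO tasks VALUES ('job-42', ?, 'pending')", (time.time(),))
db.commit()

def claim_due_tasks(conn, limit=10):
    """Points 5 and 7: executors *pull* tasks whose run_at has passed."""
    now = time.time()
    rows = conn.execute(
        "SELECT job_id FROM tasks WHERE run_at <= ? AND status = 'pending' LIMIT ?",
        (now, limit),
    ).fetchall()
    claimed = []
    for (job_id,) in rows:
        # Conditional UPDATE doubles as a single-node claim; the run_at bump means
        # a task held by a dead executor reappears after the visibility timeout.
        cur = conn.execute(
            "UPDATE tasks SET status = 'running', run_at = ? "
            "WHERE job_id = ? AND status = 'pending'",
            (now + VISIBILITY_TIMEOUT_S, job_id),
        )
        if cur.rowcount == 1:
            claimed.append(job_id)
    conn.commit()
    return claimed

print(claim_due_tasks(db))   # e.g. ['job-42']
```

Because the claim is a conditional update rather than a lock, two executors can still occasionally grab the same job across replicas, which is exactly why point 11 insists on idempotent jobs.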
-
📢 Report Release: A Stability Mechanism for India’s Carbon Market 🇮🇳

India’s upcoming Carbon Credit Trading Scheme (#CCTS) marks a major step in aligning industrial growth with climate ambition. But as global experience shows, even well-designed markets can falter without the right stability mechanisms to keep carbon prices credible and investment-worthy.

That’s where our new Institute for Energy Economics and Financial Analysis (IEEFA) report with the Environmental Defense Fund (EDF) comes in. Co-authored by me, Saurabh Trivedi, PhD, and Saloni Sachdeva Michael, and developed with guidance from Suzi Kerr, Pedro Martins Barata, and István Bart, the study proposes a stability mechanism, the Price or Supply Adjustment Mechanism (#PSAM): a fiscally responsible, legally sound, and administratively streamlined framework tailored to India’s baseline-and-credit design.

🔍 Why not simply an #MSR? In cap-and-trade systems, regulators can adjust supply directly through allowance auctions. But in India’s intensity-based baseline and credit system, credits are issued after performance verification, making supply management more nuanced. Our PSAM adapts to this reality, creating flexibility without discretion, and stability without fiscal cost.

Our proposed mechanism combines three complementary elements:
⚙️ Consignment auctions to enable transparent, regulator-linked price discovery even in thin markets, with credits forming a rule-based reserve for future stability
📅 Vintage-based credit rules to prevent surpluses from spilling across cycles, gradually phasing out old credits while preserving ownership
📈 A price corridor (already embedded in the CCTS) to guide timely, rule-based interventions when prices deviate from expected levels

Together, these tools create a stability framework that sustains credible carbon pricing, supports industrial efficiency, and reduces regulatory uncertainty by shifting interventions from ad hoc decisions to predictable, rule-based adjustments. The mechanism also preserves inter-temporal flexibility, allowing supply modulation over time, all within India’s legal and regulatory architecture.

🌿 Because the value of a carbon credit lies not just in owning it, but in knowing it holds its worth.

📘 To know more, please refer to the report link in the comments.

💬 Feel free to reach out to me for any questions, discussions, or collaborations: more to come soon! :)

Institute for Energy Economics and Financial Analysis (IEEFA) | Environmental Defense Fund | Tarun Sharma | Manjusha Mukherjee | Shuchi Malhotra | Rashi T. | Janhvi Saini | Shane Brady

#ICM #CarbonMarkets #India #ClimateFinance
-
How #BESS Provides Frequency and Voltage Support

1. #Frequency Support by BESS
Frequency regulation involves maintaining the grid frequency within a specified range (e.g., around 50 Hz in India) by balancing power supply and demand.

Key Mechanisms
1. Active Power Response
Primary Frequency Control (Inertia Emulation): BESS responds instantly to frequency deviations by injecting or absorbing active power. This emulates the inertial response of conventional generators.
Secondary Frequency Control: BESS adjusts power output to restore grid frequency to its nominal value after disturbances.
Tertiary Frequency Control: Long-term adjustment by BESS to support frequency over extended periods.
2. Fast Frequency Response (#FFR)
BESS can detect frequency deviations in milliseconds and deliver power almost instantaneously. Example: Counteracting frequency drops caused by sudden load surges or generation losses.
3. Frequency Droop Control
BESS follows a droop characteristic, where the output power is proportional to the frequency deviation. For instance, if the grid frequency drops, BESS increases active power output, and vice versa.
4. Grid-Forming Capability
Advanced BESS systems can establish and maintain grid frequency in isolated or weak grids. They act as virtual synchronous machines, providing synthetic inertia.

2. Voltage Support by #BESS
Voltage support involves maintaining grid voltage within acceptable limits to ensure power quality and stability.

Key Mechanisms
1. Reactive Power Compensation
BESS supplies or absorbs reactive power (measured in VARs) to regulate voltage levels:
If voltage is too high, BESS absorbs reactive power.
If voltage is too low, BESS supplies reactive power.
2. Volt-VAR Control
BESS dynamically adjusts reactive power output based on real-time voltage measurements. A Volt-VAR curve defines the relationship between voltage and reactive power output.
3. Dynamic Voltage Regulation
BESS stabilizes voltage during transient disturbances, such as faults or sudden load changes.
4. Grid Support in Weak Systems
In grids with limited reactive power sources, BESS can compensate for voltage drops due to long transmission lines or high renewable penetration.
5. Voltage Droop Control
Similar to frequency droop, BESS adjusts reactive power output in response to voltage changes, ensuring local stability.
6. #Harmonic Filtering
BESS inverters can reduce voltage distortion by filtering out harmonics, improving power quality.

3. Integration of Frequency and Voltage Support
Modern BESS systems are equipped with power electronics and advanced controls to simultaneously provide both frequency and voltage support:
1. Active and Reactive Power Decoupling: BESS can independently manage active power (for frequency) and reactive power (for voltage).
2. Power Conversion Systems (#PCS): Advanced inverters enable fast switching between active and reactive power delivery.
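To make the droop behaviour in sections 1.3 and 2.5 concrete, here is a toy Python sketch of a P-f droop and a Volt-VAR droop. The 4%/5% droop settings, deadbands, and MW/MVAr ratings are invented example values (not standard requirements), and a real controller runs inside the PCS firmware with state-of-charge and thermal limits layered on top.

```python
# Illustrative droop characteristics for a BESS (example values only).
F_NOM_HZ = 50.0        # nominal grid frequency (e.g., India)
P_RATED_MW = 10.0      # assumed active power rating
Q_RATED_MVAR = 5.0     # assumed reactive power rating

def p_from_frequency(f_hz, droop=0.04, deadband_hz=0.02):
    """P-f droop: inject active power when frequency is low, absorb when high."""
    df = f_hz - F_NOM_HZ
    if abs(df) <= deadband_hz:
        return 0.0
    p = -(df / (droop * F_NOM_HZ)) * P_RATED_MW      # proportional to the deviation
    return max(-P_RATED_MW, min(P_RATED_MW, p))      # clamp to converter rating

def q_from_voltage(v_pu, droop=0.05, deadband_pu=0.01):
    """Volt-VAR droop: supply VARs when voltage is low, absorb when high."""
    dv = v_pu - 1.0
    if abs(dv) <= deadband_pu:
        return 0.0
    q = -(dv / droop) * Q_RATED_MVAR
    return max(-Q_RATED_MVAR, min(Q_RATED_MVAR, q))

print(p_from_frequency(49.90))   # under-frequency -> +0.5 MW (discharge)
print(q_from_voltage(1.05))      # over-voltage    -> -5.0 MVAr (absorb)
```

Because the active-power and reactive-power loops are decoupled (section 3.1), both functions can run at the same time on the same inverter, up to its apparent-power limit.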