I recently announced a project that may have sounded a little bold, or over-hyped: An Open Source RMM - Secure, Scalable, and Self-Hosted: Willow is Taking Shape. A week ago I shared the early stages of a project I’ve been quietly building: Willow, a minimal, secure connection broker built in Rust, designed as the foundation for a truly open-source RMM (Remote Monitoring and Management) platform. It’s now more than a proof of concept, and it’s shaping into something that could genuinely change the #MSP landscape.

What’s New This Week
* Script execution is stable: agents run PowerShell/CMD, return structured results, and log everything server-side.
* Windows telemetry expanded: TPM status, last logged-on user, CPU, RAM, uptime, disks.
* Remote shell access works: full bi-directional shell sessions over WebSocket, cleanly logged and auditable.
* Multi-database support: SQLite (default, no config), MySQL, or PostgreSQL behind a unified adapter layer.
* Secure by default: Ed25519 challenge-response with no sessions, passwords, or stored secrets.

Rust All the Way Down
Both the Willow server and agent are built in Rust: fast, safe, and deeply concurrent. Key tools:
* tokio + axum for async server design
* WebSockets for agent communication
* SQLx with pooled DB adapters
* DashMap + tokio::mpsc for lock-free, async-safe routing

Why It’s a Game-Changer for MSPs and IT Teams
The remote management space has long been dominated by:
* Expensive, opaque tools
* Cloud relays and vendor lock-in
* Unwanted telemetry and hidden dependencies

Willow flips the script:
* Fully self-hosted
* No per-agent fees
* No cloud required
* No vendor lock-in
* No data exfiltration - you control everything

Smart Data Syncing: Lightweight, Efficient
Inventory and dataset syncing is intelligent and minimal:
* The agent collects structured JSON data: OS, disks, uptime, hardware
* Each dataset is hashed
* Only changed data is uploaded to the server
This keeps bandwidth low and enables efficient inventory drift tracking - ideal for large deployments.

Scales by Design
Because Willow is async, stateless, and efficient, it scales predictably:
* ~500 agents on a 2-core VPS (SQLite)
* ~2,000–5,000 agents on a 4-core / 8 GB server (MySQL)
* ~10,000+ agents on 8-core+ servers (PostgreSQL)
Memory footprint: ~2–4 KB per connected agent
Session routing: lock-free and lightning fast
Control API: clean HTTP interface for dashboards or automation

What’s Next
* Admin auth + access control
* Optional web UI for shell and inventory
* Cross-platform agent inventory (Linux/Mac)
* Horizontal scaling (multi-broker deployments)
* Agent plugin system (custom telemetry/actions)

This isn’t just a side project - it’s a growing foundation for a next-gen open RMM that puts control back in the hands of MSPs, IT teams, and developers. It’s fast, secure, and fully yours to host, fork, or extend. #RustLang #OpenSource #RMM #SelfHosted #MSP #CyberSecurity #DevOps #SystemsEngineering
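The post doesn't show how the changed-only sync decision is made, only that each dataset is hashed and uploaded when the hash differs. A minimal std-only sketch of that idea (using Rust's `DefaultHasher` as a stand-in; a real agent would more likely use a stable cryptographic hash such as SHA-256) might look like:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash a serialized inventory dataset. The real agent hashes
/// structured JSON; here we hash the raw string for illustration.
fn dataset_hash(serialized: &str) -> u64 {
    let mut h = DefaultHasher::new();
    serialized.hash(&mut h);
    h.finish()
}

/// Upload only when the hash differs from the last one the server
/// acknowledged; a first sync (no previous hash) always uploads.
fn needs_upload(current: &str, last_acked_hash: Option<u64>) -> bool {
    match last_acked_hash {
        Some(prev) => dataset_hash(current) != prev,
        None => true,
    }
}

fn main() {
    let disks_v1 = r#"{"disks":[{"mount":"C:","free_gb":120}]}"#;
    let disks_v2 = r#"{"disks":[{"mount":"C:","free_gb":119}]}"#;

    // First run: no previous hash, so the dataset is uploaded.
    assert!(needs_upload(disks_v1, None));
    let acked = dataset_hash(disks_v1);

    // Unchanged dataset: skipped, saving bandwidth.
    assert!(!needs_upload(disks_v1, Some(acked)));

    // Changed dataset: uploaded again.
    assert!(needs_upload(disks_v2, Some(acked)));
    println!("sync decisions ok");
}
```

This is what keeps per-agent sync traffic near zero between inventory changes: the steady-state cost is one small hash comparison per dataset, not a re-upload.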
Remote Infrastructure Management
Summary
Remote infrastructure management means overseeing and controlling IT systems, networks, and devices from afar, which helps organizations keep their operations running smoothly—even in hard-to-reach locations. This approach is crucial for industries with distributed assets, because it enables real-time monitoring, quick troubleshooting, and secure access without the need for physical presence.
- Centralize oversight: Use dashboards and management tools to gather and review data from all remote sites, making it easier to spot issues and maintain consistency.
- Build resilience: Design your infrastructure with backup systems, secure connections, and rugged hardware to handle tough environments and minimize downtime.
- Streamline remote access: Set up secure gateways or management platforms so technicians can troubleshoot and configure devices without traveling to every site.
OT Asset Management under NIST 1800-23

NIST 1800-23: Energy Sector Asset Management (ESAM) delivers a blueprint for visibility, control, and resilience across electric utilities, oil & gas, and other critical infrastructure sectors.

This project addresses the following characteristics of asset management:
- Asset Discovery: establishment of a full baseline of physical and logical locations of assets
- Asset Identification: capture of asset attributes, such as manufacturer, model, OS, IP addresses, MAC addresses, protocols, patch-level information, and firmware versions
- Asset Visibility: continuous identification of newly connected or disconnected devices and IP and serial connections to other devices
- Asset Disposition: the level of criticality (high, medium, or low) of a particular asset, its relation to other assets within the OT network, and its communication with other devices
- Alerting Capabilities: detection of a deviation from the expected operation of assets

A standardized architecture allows organizations to replicate deployments across sites while tailoring to local needs, ensuring both scalability and security.
- At each remote site, control systems generate raw ICS data and protocol traffic (Modbus, DNP3, EtherNet/IP), which is collected by local data servers.
- These servers act as the secure bridge, encapsulating serial traffic and transmitting structured data through VPN tunnels back to the enterprise.
- Once in the enterprise environment, asset management tools aggregate inputs from multiple sites, giving analysts a single source of truth.
- Events and asset health indicators are displayed on centralized dashboards, enabling timely detection of anomalies, vulnerabilities, or misconfigurations.
- Importantly, remote management is limited to the data servers, ensuring that core control systems remain shielded from unnecessary exposure.
Here’s a 10-point summary of the ESAM reference design asset management system:
1. Data Collection – gathers raw packet captures and structured data from OT networks.
2. Remote Configuration – allows secure management and policy-driven data ingestion.
3. Data Aggregation – centralizes collected data for further processing.
4. Monitoring – continuously observes network activity for anomalies.
5. Discovery – detects new devices when new IP/MAC addresses appear.
6. Data Analysis – normalizes multi-site traffic into one view and establishes baselines of normal behavior.
7. Device Recognition – identifies devices via MAC addresses or deep packet inspection (model/serial).
8. Device Classification – assigns criticality levels automatically or manually.
9. Data Visualization – displays collected and analyzed information in a centralized dashboard.
10. Alerting & Reporting – notifies analysts of abnormal events and generates reports, including patch availability.

#icssecurity #OTsecurity
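The discovery step above (point 5) flags a device as new the first time an unknown IP/MAC address appears in captured traffic. The reference design uses commercial tooling for this; purely as an illustration of the logic, a minimal sketch of baseline-based discovery (with made-up MAC addresses) could look like:

```rust
use std::collections::HashSet;

/// Compare observed MAC addresses against a known baseline, returning
/// the newly discovered ones and absorbing them into the baseline.
fn discover_new(baseline: &mut HashSet<String>, observed: &[&str]) -> Vec<String> {
    let mut new_assets = Vec::new();
    for mac in observed {
        // HashSet::insert returns true only when the value was absent.
        if baseline.insert(mac.to_string()) {
            new_assets.push(mac.to_string());
        }
    }
    new_assets
}

fn main() {
    // Baseline established during the initial asset-discovery pass.
    let mut baseline: HashSet<String> =
        ["00:1a:2b:3c:4d:5e"].iter().map(|m| m.to_string()).collect();

    // A later capture containing one known and one unknown device.
    let observed = ["00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee"];
    let new_assets = discover_new(&mut baseline, &observed);
    assert_eq!(new_assets, vec!["00:aa:bb:cc:dd:ee".to_string()]);

    // A second pass finds nothing new: the baseline has absorbed it.
    assert!(discover_new(&mut baseline, &observed).is_empty());
    println!("discovery ok");
}
```

In a real deployment the "new asset" event would feed the alerting and classification steps rather than simply being returned, and the baseline would persist across restarts.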
Simplify Network Management with Cisco Meraki Switches

This technical paper by Cisco Systems, Inc. outlines 10 powerful ways Meraki switches streamline enterprise network operations, from zero-touch deployment to intelligent cloud management. Key highlights include:
• Centralized port management from a single dashboard
• Remote cable testing and live packet captures
• Rogue DHCP detection and port-based access control
• Seamless firmware updates with no manual intervention
• Built-in analytics for identifying bandwidth and power usage trends

This document is an excellent resource for professionals looking to enhance efficiency, visibility, and security in their switching infrastructure through Meraki’s cloud-managed approach.

Author: Cisco Systems, Inc.

Highly recommended reading for network engineers, architects, and IT managers seeking next-generation switch management simplicity. What’s your experience with Meraki switches in large-scale deployments? Do you find cloud-based management a net benefit in enterprise environments? Let’s discuss.

#Cisco #Meraki #NetworkAutomation #Switching #CloudNetworking #NetworkManagement #ITInfrastructure
🌍 Remote operations shouldn’t mean reactive operations.

From pump stations to solar fields, remote industrial sites face tough conditions: limited connectivity, harsh environments, and minimal on-site staff. That’s why reliable SCADA integration strategies aren’t optional; they’re essential. In our latest blog, we explore 7 field-tested strategies for resilient SCADA at remote sites, including:
✔️ Choosing the right communication architecture (cellular, radio, satellite, hybrid)
✔️ Designing for redundancy and uptime
✔️ Simplifying HMIs for low-bandwidth environments
✔️ Selecting ruggedized hardware that survives harsh conditions
✔️ Leveraging cloud platforms for mobile access and analytics
✔️ Managing bandwidth with smart data handling
✔️ Building in cybersecurity from the start

💡 Whether you’re managing water infrastructure, pipeline assets, or energy sites, success begins with the right SCADA strategy, tailored to your site conditions, not a one-size-fits-all approach.

🔗 Read the full blog: 👉 https://zurl.co/FS5En

At Atlas OT, we design SCADA systems that perform where it matters most: in the field, under pressure, and with minimal intervention.

#AtlasOT #SCADA #RemoteOperations #IndustrialAutomation #ControlSystems #WaterInfrastructure #EnergySector #Cybersecurity #IIoT #CriticalInfrastructure #OilandGas
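"Smart data handling" for low-bandwidth links commonly means report-by-exception with a deadband: a reading is transmitted only when it moves far enough from the last value sent. The blog's exact approach isn't specified; a minimal sketch of the deadband idea (the 0.5 threshold and pressure readings are invented for illustration) might look like:

```rust
/// Report-by-exception filter: a reading is transmitted only when it
/// moves more than `deadband` away from the last reported value.
struct DeadbandFilter {
    deadband: f64,
    last_sent: Option<f64>,
}

impl DeadbandFilter {
    fn new(deadband: f64) -> Self {
        Self { deadband, last_sent: None }
    }

    /// Returns Some(reading) when the value should be sent upstream.
    fn filter(&mut self, reading: f64) -> Option<f64> {
        let send = match self.last_sent {
            None => true, // always send the first reading
            Some(prev) => (reading - prev).abs() > self.deadband,
        };
        if send {
            self.last_sent = Some(reading);
            Some(reading)
        } else {
            None
        }
    }
}

fn main() {
    // Pump-station pressure samples; only meaningful moves get sent.
    let mut psi = DeadbandFilter::new(0.5);
    let readings = [100.0, 100.2, 100.4, 101.0, 101.1];
    let sent: Vec<f64> = readings.iter().filter_map(|&r| psi.filter(r)).collect();

    // Only the first reading and the >0.5 jump are transmitted.
    assert_eq!(sent, vec![100.0, 101.0]);
    println!("sent {} of {} readings", sent.len(), readings.len());
}
```

Production systems typically pair this with a periodic heartbeat so a flat-lined sensor is still distinguishable from a dead radio link.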