While everyone is talking about PSD3 and its implications, Europe has chosen to make its transition from Open Banking to Open #finance through a separate path, the so-called FIDA. Let's take a look.

Together with the UK, Europe has been a global frontrunner in introducing rules that facilitate #openbanking. It did so through the famous PSD2, a directive proposed as early as 2013 and implemented in 2018, which regulated and democratized access to data so that #innovation could be built on top as an additional layer. In practice this meant giving third parties access to the (account) data held by banks via modern web rails called APIs.

The next and more mature phase of this journey is open finance, which is set to unify different and more complex use cases – savings, lending, investments, insurance, etc. – under one digital dashboard with hyper-customized services powered by open, rich, actionable data. #data is by far the most significant element in this transition and transformation process, and that's why the new framework is called Financial Data Access.

Here are the main elements of FIDA:
― Consumers will be able to securely share their data (with banks, fintechs, etc.)
― Nearly all financial services data will be within its scope – the main difference with PSD2
― Data covers both data supplied by the customer and data stemming from customer interactions
― Unlike PSD2 & PSD3, which only apply to banking institutions offering online-accessible accounts, FIDA extends its scope to institutions across the entire industry
― Firms can play a dual role: data holder and data user
― Data holders will be able to ask for reasonable compensation for making data accessible to data users
― Data users will have "read access", but will not be able to initiate transactions on behalf of customers
― Customers will be in full control of their data (a permission dashboard to view and manage access)

FIDA is important because of two key novelties:
1) it enables customers to control the use of their financial data
2) it allows financial institutions to charge other service providers (e.g. #fintech players) for data access granted by customers – something that was always missing from PSD2, and one of banks' main disincentives to invest in open banking.

FIDA will take time: it's still a draft and is not likely to be finalized before 2025. Then implementation timelines kick in as well. And there is an interdependency with other EU projects (e.g. PSD3, EPI, the digital euro). But it has the potential to completely change the EU finance landscape. Time will tell.

What do you think?

Opinions: my own. Graphic sources: WhiteSight, Mod5r, European Commission
The Role of Data in Business
Explore top LinkedIn content from expert professionals.
-
me: “do you have a data strategy?”
VP Data: “of course, we use Snowflake, Fivetran, and dbt”

This is something I see too often. Way too often. Data teams who think strategy is a list of tools. It’s not.

A good data strategy should at least include:
1. Business Purpose - how are you going to use data to create value for the business?
2. Data Governance and Quality - how do you protect the data and its integrity over time?
3. Architecture and Infrastructure - how does your data platform work and integrate with other systems?
4. People and Processes - team makeup, operating model, etc.
5. Success KPIs - what do you measure to ensure your data is driving value and is in a good state?

What else would you include?

♻️ Repost to help data leaders in your network
-
The Evolution of Data Architectures: From Warehouses to Meshes

As data continues to grow exponentially, our approaches to storing, managing, and extracting value from it have evolved. Let's revisit four key data architectures:

1. Data Warehouse
• Structured, schema-on-write approach
• Optimized for fast querying and analysis
• Excellent for consistent reporting
• Less flexible for unstructured data
• Can be expensive to scale
Best For: Organizations with well-defined reporting needs and structured data sources.

2. Data Lake
• Schema-on-read approach
• Stores raw data in native format
• Highly scalable and flexible
• Supports diverse data types
• Can become a "data swamp" without proper governance
Best For: Organizations dealing with diverse data types and volumes, focusing on data science and advanced analytics.

3. Data Lakehouse
• Hybrid of warehouse and lake
• Supports both SQL analytics and machine learning
• Unified platform for various data workloads
• Better performance than traditional data lakes
• Relatively new concept with evolving best practices
Best For: Organizations looking to consolidate their data platforms while supporting diverse use cases.

4. Data Mesh
• Decentralized, domain-oriented data ownership
• Treats data as a product
• Emphasizes self-serve infrastructure and federated governance
• Aligns data management with organizational structure
• Requires significant organizational changes
Best For: Large enterprises with diverse business domains and a need for agile, scalable data management.

Choosing the Right Architecture: Consider factors like:
- Data volume, variety, and velocity
- Organizational structure and culture
- Analytical and operational requirements
- Existing technology stack and skills

Modern data strategies often involve a combination of these approaches. The key is aligning your data architecture with your organization's goals, culture, and technical capabilities. The schema-on-write vs. schema-on-read distinction between the first two is sketched in code below.

As data professionals, understanding these architectures, their evolution, and applicability to different scenarios is crucial.

What's your experience with these data architectures? Have you successfully implemented or transitioned between them? Share your insights and let's discuss the future of data management!
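To make the first two architectures concrete, here is a minimal sketch of schema-on-write versus schema-on-read. It uses DuckDB purely as a convenient illustration; the table, file, and field names are invented for the example.

```python
# A sketch contrasting schema-on-write (warehouse) with schema-on-read
# (lake). DuckDB is used only for illustration; names are invented.
import json
import duckdb

con = duckdb.connect()

# Schema-on-write: structure is enforced before any data lands, so a
# malformed record is rejected at load time, not at query time.
con.execute("CREATE TABLE orders (order_id INTEGER, amount DECIMAL(10, 2))")
con.execute("INSERT INTO orders VALUES (1, 19.99), (2, 5.00)")

# Schema-on-read: raw records are stored as-is (JSON lines here) and a
# structure is imposed only when someone queries them.
with open("events.jsonl", "w") as f:
    f.write(json.dumps({"order_id": 3, "amount": 7.5, "note": "gift"}) + "\n")
    f.write(json.dumps({"order_id": 4}) + "\n")  # missing field: still stored

rows = con.execute(
    "SELECT order_id, amount FROM read_json_auto('events.jsonl')"
).fetchall()
print(rows)  # order 4's amount surfaces as NULL, not as a load error
```

The trade-off in one line: the warehouse pays the modeling cost up front and gets consistency; the lake defers it and gets flexibility, at the risk of the "data swamp" the post mentions.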
-
📌 The 3 Types of Dashboards

This confusion shows up constantly in BI projects, usually much later than it should, when stakeholders start saying things like:

“𝘚𝘰… 𝘸𝘩𝘢𝘵 𝘢𝘮 𝘐 𝘴𝘶𝘱𝘱𝘰𝘴𝘦𝘥 𝘵𝘰 𝘥𝘰 𝘸𝘪𝘵𝘩 𝘵𝘩𝘪𝘴?”

At that point, the issue is rarely the data model or the visuals themselves, but the fact that the dashboard was never designed around the kind of decision it was supposed to support.

Today’s a good day for a quick reminder: dashboards are not interchangeable. Different roles = different priorities = different dashboards. For simplification, let's classify them into 3 major categories.

1️⃣ 𝐄𝐱𝐞𝐜𝐮𝐭𝐢𝐯𝐞 𝐃𝐚𝐬𝐡𝐛𝐨𝐚𝐫𝐝𝐬
They are built for people who need to understand where the business is going and whether it is still on track. They tend to surface a small number of signals that summarize overall health, progress against objectives, and emerging risks. Data is updated frequently enough to stay relevant but not so often that it distracts from long-term thinking. Because of that, these dashboards usually rely on aggregated KPIs and controlled views, and they work best when they help leadership align on direction rather than debate numbers.

2️⃣ 𝐀𝐧𝐚𝐥𝐲𝐭𝐢𝐜𝐚𝐥 𝐃𝐚𝐬𝐡𝐛𝐨𝐚𝐫𝐝𝐬
These dashboards are built to be explored, filtered, sliced, and challenged, often across long historical windows, because understanding patterns, seasonality, and anomalies takes context and depth. They are typically used by analysts, data scientists, or domain experts who expect to spend time inside the data, moving back and forth between views until the story becomes clear enough to support a decision.

3️⃣ 𝐎𝐩𝐞𝐫𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐃𝐚𝐬𝐡𝐛𝐨𝐚𝐫𝐝𝐬
They are designed for teams who need to see what is happening right now, or what just happened, and react immediately. The value of these dashboards comes from timeliness and clarity rather than completeness, which is why they often focus on live or near-real-time metrics, alerts, and very specific indicators tied to concrete actions. If looking at an operational dashboard does not clearly suggest what should be done next, it is usually trying to serve too many purposes at once.

Now, one of the most common mistakes in BI is trying to build one single dashboard that does all of this, hoping that filters and tabs will somehow make it work for everyone. In practice, this usually leads to frustration on all sides, because executives, analysts, and operational teams are not asking the same questions, even when they are looking at the same data.

Good dashboard design starts much earlier than layout or tooling. It starts with being explicit about:
→ Who the dashboard is for
→ What kind of decision it is meant to support
→ How quickly that decision needs to be made

Everything else follows from there.

💾 Save this for your next BI discussion
Repost ♻️ for others.
-
For years, as a knowledge graph practitioner, I kept hearing the same refrain: you don't need an ontology to do knowledge graphs. Too complicated. Unnecessary overhead. Just connect the data and move on.

Now, amusingly, I'm encountering the reverse. An organisation realises it needs an ontology, and gets told by some: yes, you need an ontology - but not a knowledge graph. That part is too complicated.

At the same time, Context Graph is now gaining traction as a term. It’s often positioned as a fresh idea, when in reality it rebrands knowledge graph principles. We’ve been here before - first with the term Semantic Web, then with Linked Data.

Let me cut through all of this.

🔵 The Truth Is Simple
To solve the data integration problem - to make your organisation's data AI-ready - you need two things. First, you need to share meaning clearly: the abstract concepts, the definitions, the metadata that describes your world. That's an ontology. Second, you need to connect your data into a rich network of relationships. No fact lives in splendid isolation. Its value comes from how it relates to other facts. In any organisation of scale, this means a decentralised way of identifying and linking facts together. That's a graph - a vast, distributed graph.

🔵 These Are Not Separate Things
They are one thing. You need to move seamlessly from individual facts up into the conceptual realm - to reason at the level of abstractions. Then you need to come back down from concepts into the world of facts - to ground that reasoning in reality. Put those together and you have a knowledge graph. The ontology without the graph is a map with no territory. The graph without the ontology is territory with no map. Neither works alone. (A tiny sketch of the two layers in one artifact follows below.)

🔵 The Final Piece: Open Standards
It's not enough to get your data AI-ready for today's task - enabling agents to work with your internal knowledge. You also need to prepare for what comes next. For organisations that successfully navigate this phase, the future is interoperability: AI marketplaces where agents, data, and meaning flow across boundaries. That future only works if what you build today is based on open standards. True open standards - from recognised bodies like the W3C, with wide adoption. Not proprietary formats dressed up as "open." Not vendor-specific schemas that lock you in. Only then can your AI-ready data seamlessly plug into the ecosystems of tomorrow.

🔵 The Bottom Line
Don't let anyone split what should be whole. Ontology and graph are two aspects of the same solution. Meaning and connection. Abstraction and grounding. You need both. And you need them built on standards that will outlast any single vendor's roadmap. That's not complexity. That's clarity.

⭕ What is a Knowledge Graph: https://lnkd.in/eFgDfjRQ
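To ground this, here is a tiny sketch of ontology and graph living in one artifact, using Python's rdflib and the W3C standards mentioned above (RDF, RDFS, SPARQL). The concepts, facts, and namespace are invented for illustration; any standards-based RDF toolkit would do.

```python
# A minimal knowledge graph: ontology (concepts) and graph (facts) in
# one RDF store, queried together. All names are illustrative only.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# The ontology layer: abstract concepts and their relationships (the "map").
g.add((EX.Employee, RDFS.subClassOf, EX.Person))
g.add((EX.worksFor, RDFS.domain, EX.Person))
g.add((EX.worksFor, RDFS.range, EX.Organisation))

# The graph layer: concrete facts linked to those concepts (the "territory").
g.add((EX.alice, RDF.type, EX.Employee))
g.add((EX.alice, EX.worksFor, EX.acme))
g.add((EX.acme, RDF.type, EX.Organisation))
g.add((EX.acme, RDFS.label, Literal("ACME Corp")))

# One SPARQL query moves between both layers: from the concept Employee
# down to individual facts and back up to a labelled organisation.
q = """
PREFIX ex: <http://example.org/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?person ?orgLabel WHERE {
    ?person a ex:Employee ;
            ex:worksFor ?org .
    ?org rdfs:label ?orgLabel .
}
"""
for row in g.query(q):
    print(row.person, "works for", row.orgLabel)
```

The point of the sketch: neither half is useful alone. Delete the ontology triples and the query loses its meaning; delete the fact triples and it has nothing to reason over.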
-
Data jobs didn’t disappear — the value did.

A decade ago, Harvard Business Review called the Data Scientist “the sexiest job of the 21st century.” Everyone rushed in — bootcamps, certificates, “transition to data” programs exploded.

Fast forward: hiring freezes, layoffs, disillusionment. What happened? Most data teams failed to deliver business value.
- They built dashboards that no one used.
- Models that never left Jupyter notebooks.
- Reports that didn’t drive decisions.

As one study found, only ~32% of companies actually realize measurable value from data investments. The rest? Busywork disguised as insight.

The hard truth: we trained a generation of “data tool users,” not business problem solvers.

Here’s what the next generation of data professionals must do differently:
1. Define business problems first. If you can’t articulate the “why,” your model is useless.
2. Run experiments, deploy solutions, measure results. Rigor beats fancy titles.
3. Deliver outcomes, not outputs. Dashboards and models don’t matter — impact does.

Stop chasing influencers and certificates. Start chasing value creation. In this market, the sexiest skill isn’t Python - it’s critical thinking.

#datascience #business #analytics
-
Reporting is NOT delivering insights. Unfortunately, many data & analytics professionals think it is.

Reporting dashboards show WHAT's happening and enable basic slicing and dicing, but fail to deliver the WHY.

Example - "Performance is down 15% WoW"
This is just stating the obvious. It's not a real insight. It's not actionable. This leaves many business leaders frustrated.

When business stakeholders ask for more dashboards, what they are ultimately trying to achieve is: "I need to know what's impacting my key business metrics and what I should do to improve them." Adding 15 more charts/views/slices won't help much to understand what's impacting the key business metrics and which actions should be taken.

The key to REAL INSIGHTS that can move the needle? ROOT-CAUSE ANALYSIS to find the WHY (i.e., DIAGNOSTIC analytics). This is the most effective way to drive change with data & analytics, and it can make the data & analytics team a TRUSTED ADVISOR with a seat at the leadership and decision-making table. (A small worked example of this decomposition follows below.)

Insights need to be:
🟢 SPEEDY: business stakeholders need quick insights into performance changes to make decisions before it's too late
🟢 PROACTIVE: don't wait for business stakeholders to ask. Monitor key metrics and proactively share insights to become that trusted advisor
🟢 IMPACT-ORIENTED: focus on the key drivers that drove most of the change and communicate accordingly
🟢 EFFECTIVELY COMMUNICATED to drive the right action

#data #analytics #impact #diagnosticanalytics
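As a minimal illustration of that diagnostic step, the sketch below decomposes the "down 15% WoW" example by segment to find which driver contributed most. The segments and numbers are invented for the example.

```python
# Root-cause sketch: attribute a week-over-week revenue drop to segments.
# Data is invented so the topline matches the post: 200 -> 170 = -15% WoW.
import pandas as pd

df = pd.DataFrame({
    "segment": ["Web", "Mobile", "Partner", "Web", "Mobile", "Partner"],
    "week":    ["prev", "prev", "prev", "curr", "curr", "curr"],
    "revenue": [100.0, 80.0, 20.0, 95.0, 60.0, 15.0],
})

pivot = df.pivot_table(index="segment", columns="week", values="revenue")
pivot["delta"] = pivot["curr"] - pivot["prev"]
total_delta = pivot["delta"].sum()

# Share of the overall change attributable to each segment.
pivot["contribution_pct"] = 100 * pivot["delta"] / total_delta
print(pivot.sort_values("contribution_pct", ascending=False))
# Mobile explains ~67% of the drop: that is the thread to pull on,
# not another chart of the topline.
```

This is deliberately simple, but the pattern scales: decompose the metric along its driver dimensions, rank contributions, then investigate the largest one.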
-
It starts with one missing value, one duplicate row… and suddenly your entire system can’t be trusted. Because data issues don’t fail loudly. They compound silently.

Here’s what keeps pipelines reliable 👇

- Null value checks: missing fields in key columns can quietly break logic and downstream outputs.
- Duplicate checks: repeated records distort metrics, models, and business decisions.
- Primary key validation: every record must be unique, or nothing stays consistent.
- Referential integrity: broken relationships between tables lead to incorrect joins and insights.
- Data type & format validation: wrong formats or types cause subtle but costly errors.
- Range & outlier checks: values outside expected limits often signal deeper issues.
- Freshness & volume checks: unexpected delays or spikes usually point to upstream failures.
- Schema change detection: even small structural changes can break entire pipelines.
- Distribution drift checks: data patterns shifting over time can silently degrade models.
- Business rule validation: if domain logic breaks, the output becomes unreliable.
- Aggregation & historical checks: totals and trends must stay consistent across layers and over time.

Data quality issues don’t crash systems. They corrupt them. (A few of these checks are sketched in code below.)

What’s the one check your pipeline is missing right now?

Follow Sumit Gupta for more such insights!!
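Here is a minimal sketch of a few of these checks as plain pandas assertions. The table, column names, and thresholds are invented assumptions; a real pipeline would typically run such checks in a dedicated framework.

```python
# A handful of the checks above, as simple pandas functions.
# Columns and thresholds are illustrative assumptions only.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    failures = []

    # Null value check: key columns must be fully populated.
    for col in ("order_id", "customer_id"):
        if df[col].isna().any():
            failures.append(f"nulls found in key column '{col}'")

    # Duplicate / primary key check: every order_id must be unique.
    if df["order_id"].duplicated().any():
        failures.append("duplicate primary keys in 'order_id'")

    # Range & outlier check: values outside expected limits.
    if not df["amount"].between(0, 10_000).all():
        failures.append("'amount' outside expected range [0, 10000]")

    # Freshness check: newest record should be under a day old.
    if (pd.Timestamp.now() - df["loaded_at"].max()) > pd.Timedelta(days=1):
        failures.append("data is stale: nothing loaded in the last 24h")

    return failures

df = pd.DataFrame({
    "order_id": [1, 2, 2],                    # duplicate key
    "customer_id": [10, None, 11],            # null in key column
    "amount": [50.0, 99_999.0, 12.0],         # out-of-range value
    "loaded_at": pd.to_datetime(["2024-01-01"] * 3),  # stale
})
for msg in run_quality_checks(df):
    print("FAILED:", msg)
```

Each check is trivial on its own; the reliability comes from running all of them, on every load, before anything downstream consumes the data.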
-
Tools are the fashion; Data Modeling is the skeleton.

You can swap Airflow for Prefect, or Spark for DuckDB. But you can’t swap "bad logic" for a faster engine and expect it to work.

In one project, I used Airflow. In another, Spark. Lately, it’s all dbt. But 100% of the time, the win came down to data modeling fundamentals. Building a data platform without modeling is like building a skyscraper on a swamp. It doesn't matter how expensive your gold-plated elevators (tools) are if the foundation is sinking.

Here's what actually matters:

𝗗𝗶𝗺𝗲𝗻𝘀𝗶𝗼𝗻𝗮𝗹 𝗠𝗼𝗱𝗲𝗹𝗶𝗻𝗴 = 𝗦𝗽𝗲𝗲𝗱
Star schemas make queries fast. Facts and dimensions separated = happy analysts.

𝗦𝗖𝗗𝘀 𝗪𝗶𝗹𝗹 𝗕𝗶𝘁𝗲 𝗬𝗼𝘂
Skip SCD Type 2 tracking? Debug why historical reports show wrong data at 2 AM. (A minimal SCD2 sketch follows below.)

𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗜𝘀𝗻'𝘁 𝗥𝗲𝗹𝗶𝗴𝗶𝗼𝗻
OLTP systems? Normalize for integrity. OLAP systems? Denormalize for speed. Know your world. Design accordingly.

𝗗𝗮𝘁𝗮 𝗩𝗮𝘂𝗹𝘁 = 𝗙𝗹𝗲𝘅𝗶𝗯𝗶𝗹𝗶𝘁𝘆
Business requirements changing weekly? Data Vault keeps you sane. Verbose but bulletproof.

👉 Here are the real non-negotiables:
• Model for how data will be queried, not just stored
• Document your grain—ambiguity kills data trust
• Surrogate keys > natural keys (trust me on this)
• Test your model with real queries before building pipelines

My 2 cents: master data modeling, and every tool becomes easier. Skip it, and you'll spend your career firefighting broken pipelines.

Are you willing to upskill❓ Explore these resources:
→ Michael K.'s KahanDataSolutions - https://lnkd.in/g4JSFPph
→ Benjamin Rogojan's Seattle Data Guy - https://lnkd.in/ghewnvBX
→ The Data Warehouse Toolkit by Ralph Kimball - https://lnkd.in/dTynC6yD

Image Credits: Shubham Srivastava

Every pipeline you build will eventually be replaced. A solid data model? That becomes the language of the company.

What's one data modeling mistake that cost you hours of debugging? Let's learn together. 👇
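For the SCD point above, here is a minimal sketch of Type 2 versioning in pandas: instead of overwriting a changed attribute, close out the old row and append a new one, so history survives. Column names and the single-current-row assumption are illustrative only; in practice this would be a merge in SQL or dbt.

```python
# SCD Type 2 in miniature: version dimension rows rather than update them.
# Schema and data are invented for the example.
import pandas as pd

HIGH_DATE = pd.Timestamp("9999-12-31")  # sentinel "open-ended" end date

dim = pd.DataFrame({
    "customer_id": [42],
    "city": ["Berlin"],
    "valid_from": [pd.Timestamp("2023-01-01")],
    "valid_to": [HIGH_DATE],
    "is_current": [True],
})

def apply_scd2(dim, customer_id, new_city, change_date):
    # Assumes exactly one current row per customer_id.
    current = (dim["customer_id"] == customer_id) & dim["is_current"]
    if dim.loc[current, "city"].iloc[0] == new_city:
        return dim  # no change, nothing to version

    # Close the old version instead of overwriting it.
    dim.loc[current, ["valid_to", "is_current"]] = [change_date, False]

    # Append the new version as the current row.
    new_row = pd.DataFrame({
        "customer_id": [customer_id], "city": [new_city],
        "valid_from": [change_date], "valid_to": [HIGH_DATE],
        "is_current": [True],
    })
    return pd.concat([dim, new_row], ignore_index=True)

dim = apply_scd2(dim, 42, "Munich", pd.Timestamp("2024-06-01"))
print(dim)  # historical reports can still join against the Berlin era
```

Skip this and every historical report silently re-states the past with today's attribute values, which is exactly the 2 AM debugging session the post warns about.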
-
The unprecedented proliferation of data stands as a testament to human ingenuity and technological advancement. Every digital interaction, every transaction, and every online footprint contributes to this ever-growing ocean of data. The value embedded within this data is immense, capable of transforming industries, optimizing operations, and unlocking new avenues for growth. However, the true potential of data lies not just in its accumulation but in our ability to convert it into meaningful information and, subsequently, actionable insights. The challenge, therefore, is not in collecting more data but in understanding and interacting with it effectively.

For companies looking to harness this potential, the key lies in asking the right questions. Here are three pieces of advice to guide your journey in leveraging data effectively:

𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲 𝟏: 𝐄𝐬𝐭𝐚𝐛𝐥𝐢𝐬𝐡 𝐆𝐨𝐚𝐥-𝐎𝐫𝐢𝐞𝐧𝐭𝐞𝐝 𝐐𝐮𝐞𝐫𝐢𝐞𝐬
• Tactic 1: Define specific, measurable objectives for each data analysis project. For instance, rather than a broad goal like "increase sales," aim for "identify factors that can increase sales in the 18-25 age group by 10% in the next quarter."
• Tactic 2: Regularly review and adjust these objectives based on changing business needs and market trends to ensure your data queries remain relevant and targeted.

𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲 𝟐: 𝐈𝐧𝐭𝐞𝐠𝐫𝐚𝐭𝐞 𝐂𝐫𝐨𝐬𝐬-𝐃𝐞𝐩𝐚𝐫𝐭𝐦𝐞𝐧𝐭𝐚𝐥 𝐈𝐧𝐬𝐢𝐠𝐡𝐭𝐬
• Tactic 1: Conduct regular interdepartmental meetings where different teams can present their data findings and insights. This practice encourages a holistic view of data and generates multifaceted questions.
• Tactic 2: Implement a shared analytics platform where data from various departments can be accessed and analyzed collectively, facilitating a more comprehensive understanding of the business.

𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲 𝟑: 𝐀𝐩𝐩𝐥𝐲 𝐏𝐫𝐞𝐝𝐢𝐜𝐭𝐢𝐯𝐞 𝐀𝐧𝐚𝐥𝐲𝐭𝐢𝐜𝐬
• Tactic 1: Utilize machine learning models to analyze current and historical data to predict future trends and behaviors. For example, use customer purchase history to forecast future buying patterns (see the sketch below).
• Tactic 2: Regularly update and refine your predictive models with new data, and use these models to generate specific, forward-looking questions that can guide business strategy.

By adopting these strategies and tactics, companies can move beyond the surface level of data interpretation and dive into deeper, more meaningful analytics. It's about transforming data from a static resource into a dynamic tool for future growth and innovation.

********************************************
• Follow #JeffWinterInsights to stay current on Industry 4.0 and other cool tech trends
• Ring the 🔔 for notifications!
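As a toy illustration of Strategy 3, the sketch below fits a simple model to invented quarterly purchase counts and projects the next period. The data, trend, and feature choice are assumptions for the example; real forecasting needs far more care (seasonality, holdout validation, uncertainty) than a straight line.

```python
# A deliberately simple predictive-analytics sketch: fit historical
# purchase counts and project the next quarter. All numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

quarters = np.arange(1, 9).reshape(-1, 1)  # t = 1..8 (two years of history)
purchases = np.array([120, 135, 150, 149, 170, 188, 205, 214])

model = LinearRegression().fit(quarters, purchases)
next_q = model.predict(np.array([[9]]))[0]
print(f"Projected purchases next quarter: {next_q:.0f}")

# The forecast then seeds the forward-looking questions Tactic 2 asks for,
# e.g. "what inventory do we need if demand keeps growing ~14 units/quarter?"
```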