
Daily Tech Digest - June 06, 2024

How AI will kill the smartphone

The great thing about AI is that it’s software-upgradable. When you buy an AI phone, the phone gets better mainly through software updates, not hardware updates. ... As we’re talking back and forth with AI agents, people will use earbuds and, increasingly, AI glasses to interact with AI chatbots. The glasses will use built-in cameras for photo and video multimodal AI input. As glasses become the main interface, the user experience will likely improve more with better glasses (not better phones), with improved light engines, speakers, microphones, batteries, lenses, and antennas. With the inevitable and inexorable miniaturization of everything, eventually a new class of AI glasses will emerge that won’t need wireless tethering to a smartphone at all, and will contain all the elements of a smartphone in the glasses themselves. ... Glasses will prove to be the winning device, because glasses can position speakers within an inch of the ears, hands-free microphones within four inches of the mouth and, the best part, screens directly in front of the eyes. Glasses can be worn all day, every day, without anything physically in the ear canal. In fact, roughly 4 billion people already wear glasses every day.


Million Dollar Lines of Code - An Engineering Perspective on Cloud Cost Optimization

Storage is still cheap. We should really still be thinking about storage as being pretty cheap. Calling APIs costs money. It's always going to cost money. In fact, you should accept that anything you do in the cloud costs money. It might not be a lot; it might be a few pennies. It might be a few fractions of pennies, but it costs money. It would be best to consider that before you call an API. The cloud has given us practically infinite scale; however, I have not yet found an infinite wallet. We have a system design constraint that no one seems to be focusing on during design, development, and deployment. What's the important takeaway from this? Should we now layer one more thing on top of what it means to be a software developer in the cloud these days? I've been thinking about this for a long time, but the idea of adding one more thing to worry about sounds pretty painful. Do we want all of our engineers agonizing over the cost of their code? Even in this new cloud world, the following quote from Donald Knuth is as true as ever.
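
To make the point about pennies adding up concrete, here is a minimal back-of-the-envelope sketch in Python. The per-request and per-gigabyte prices are hypothetical placeholders rather than any provider's published rates; the only point is that a per-call cost that looks negligible can dwarf storage once multiplied by production traffic.

# Back-of-the-envelope cloud cost estimate. All prices are hypothetical
# placeholders, not any provider's actual rates.
PRICE_PER_MILLION_REQUESTS = 5.00    # assumed $ per 1M API requests
PRICE_PER_GB_MONTH_STORAGE = 0.023   # assumed $ per GB-month of object storage

requests_per_second = 200
seconds_per_month = 60 * 60 * 24 * 30

monthly_requests = requests_per_second * seconds_per_month
request_cost = monthly_requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS

stored_gb = 10_000   # 10 TB of "cheap" storage
storage_cost = stored_gb * PRICE_PER_GB_MONTH_STORAGE

print(f"API requests per month: {monthly_requests:,}")
print(f"Monthly request cost:   ${request_cost:,.2f}")
print(f"Monthly storage cost:   ${storage_cost:,.2f}")

At these assumed rates, a modest 200 requests per second costs roughly ten times more per month than 10 TB of storage, which is why the call pattern, not the data volume, is usually where the design conversation needs to happen.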


The five-stage journey organizations take to achieve AI maturity

We are far from seeing most organizations fully versed in and comfortable with AI as part of their company strategy. However, Asana and Anthropic have outlined five stages of AI maturity: a guide executives can use to gauge where their company stands on the path to real transformative outcomes. Many respondents say they’re in either the first or second stage. Only seven percent claim they’ve achieved the highest stage. ... Asana and Anthropic conclude that boosting comprehension is important, offering resources, training programs and support structures for knowledge workers to improve their education. Companies must also prioritize AI safety and reliability, meaning that AI vendors should be selected with “complete, integrated data models and invest in high-quality data pipelines and robust governance practices.” AI responses must be interpretable to facilitate decision-making and should always be controlled and directed by human operators. Other elements of organizations in Stage 5 include embracing a human-centered approach, developing strong, comprehensive policies and principles to navigate AI adoption responsibly, and being able to measure AI’s impact and value.


Unauthorized AI is eating your company data, thanks to your employees

A major problem with shadow AI is that users don’t read the privacy policy or terms of use before shoveling company data into unauthorized tools, she says. “Where that data goes, how it’s being stored, and what it may be used for in the future is still not very transparent,” she says. “What most everyday business users don’t necessarily understand is that these open AI technologies, the ones from a whole host of different companies that you can use in your browser, actually feed themselves off of the data that they’re ingesting.” ... Using AI tools, even officially licensed ones, means organizations need to have good data management practices in place, Simberkoff adds. An organization’s access controls need to limit employees from seeing sensitive information not necessary for them to do their jobs, she says, and longstanding security and privacy best practices still apply in the age of AI. Rolling out an AI tool, with its constant ingestion of data, is a stress test of a company’s security and privacy plans, she says. “This has become my mantra: AI is either the best friend or the worst enemy of a security or privacy officer,” she adds. “It really does drive home everything that has been a best practice for 20 years.”


How a data exchange platform eases data integration

As our software-powered world becomes more and more data-driven, unlocking and unblocking the coming decades of innovation hinges on data: how we collect it, exchange it, consolidate it, and use it. In a way, the speed, ease, and accuracy of data exchange have become the new Moore’s law. Safely and efficiently importing a myriad of data file types from thousands or even millions of different unmanaged external sources is a pervasive, growing problem. ... Data exchange and import solutions are designed to work seamlessly alongside traditional integration solutions. ETL tools integrate structured systems and databases and manage the ongoing transfer and synchronization of data records between these systems. Adding a solution for data-file exchange next to an ETL tool enables teams to facilitate the seamless import and exchange of variable unmanaged data files. The data exchange and ETL systems can be implemented on separate, independent, and parallel tracks, or so that the data-file exchange solution feeds the restructured, cleaned, and validated data into the ETL tool for further consolidation in downstream enterprise systems.
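
As a small illustration of the pattern described above, the sketch below cleans and validates an incoming external CSV file before handing the restructured records to whatever downstream ETL load a team already runs. The column names and validation rules are illustrative assumptions, not a reference to any particular data exchange product.

import csv

REQUIRED_COLUMNS = {"customer_id", "order_date", "amount"}

def clean_and_validate(path):
    """Validate an unmanaged external CSV and split it into good and rejected rows."""
    good, rejected = [], []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            raise ValueError(f"missing columns: {sorted(missing)}")
        for row in reader:
            try:
                record = {
                    "customer_id": row["customer_id"].strip(),
                    "order_date": row["order_date"].strip(),
                    "amount": float(row["amount"]),
                }
            except (ValueError, AttributeError):
                rejected.append(row)   # keep for review rather than silently dropping
                continue
            good.append(record)
    return good, rejected

# The cleaned records can then be written to a staging area or passed to the
# ETL tool for consolidation into downstream enterprise systems.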


AI is used to detect threats by rapidly generating data that mimics realistic cyber threats

When we talk about AI, it’s essential to understand its fundamental workings—it operates based on the data it’s fed. Hence, the data input is crucial; it needs to be properly curated. Firstly, ensuring anonymisation is key; live customer data should never be directly integrated into the model to comply with regulatory standards. Secondly, regulatory compliance is paramount. We must ensure that the data we feed into the framework adheres to all relevant regulations. Lastly, many organisations grapple with outdated legacy tech stacks. It’s essential to modernise and streamline these systems to align with the requirements of contemporary AI technology. Also, mitigating bias in AI is crucial. Since the data we use is created by humans, biases can inadvertently seep into the algorithms. Addressing this issue requires careful consideration and proactive measures to ensure fairness and impartiality. ... It’s important for people to be highly aware of biases and misconceptions surrounding AI. We need to be conscious of the potential biases in AI systems. 
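
As a minimal sketch of the anonymisation step mentioned above, the snippet below pseudonymises direct identifiers with a salted one-way hash and redacts obvious identifiers from free text before any record reaches a model. The field names, the salt handling, and the single email pattern are simplified assumptions; production pipelines typically rely on dedicated de-identification tooling and broader PII detection.

import hashlib
import re

SALT = "rotate-and-store-this-secret-separately"   # illustrative only

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub_free_text(text: str) -> str:
    """Redact obvious identifiers (here, just email addresses) from free text."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

record = {
    "customer_id": "C-102938",
    "email": "jane.doe@example.com",
    "note": "Contacted jane.doe@example.com about renewal.",
}

anonymized = {
    "customer_id": pseudonymize(record["customer_id"]),
    "email": pseudonymize(record["email"]),
    "note": scrub_free_text(record["note"]),
}
print(anonymized)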


Tackling Information Overload in the Age of AI

The reason this story is so universal is that the kind of information that drives knowledge-intensive workflows is unstructured data, which has stubbornly resisted the automation wave that has taken on so many other enterprise workflows using software and software-as-a-service (SaaS). SaaS has empowered teams with tools they can use to efficiently manage a wide variety of workflows involving structured data. However, SaaS offerings have been unable to take on the core “jobs to be done” in the knowledge-intensive enterprise because they can’t read and understand unstructured data. They aren’t capable of performing human-like services with autonomous decision-making abilities. As a result, knowledge workers are still stuck doing a lot of monotonous and undifferentiated data work. However, newly available large language models (LLMs) and generative AI excel at processing and extracting meaning from unstructured data. LLM-powered “AI agents” can perform services such as reading and summarizing content and prioritizing work, and can run multistage knowledge workflows autonomously.


CDOs Should Understand Business Strategy to Be Outcome-focused

To be outcome-focused, the CDO has to prioritize understanding the business or corporate strategy, he says. In addition, one needs to comprehend the organizational aspirations and how to deliver on key business outcomes, which could include monetization of commercial opportunities, risk mitigation, cost savings, or providing client value. Next, Thakur advises leaders to focus on the foundational data and analytic capabilities to drive business outcomes. There must be a well-organized data and analytic strategy to start with, a good tech stack, an analytic environment, data management, and governance principles. While delivering on some of the use cases may take time, it is imperative to have quick wins along the way, says Thakur. He recommends CDOs create reusable data products and assets while having an agile operationalization process. Then, Thakur suggests data leaders create a solid engagement model to ensure that the data analytics team is in sync with business and product owners. He urges leaders to put an effective ideation and opportunity management framework into action to capture business ideas and prioritize use cases.


Besides the traditional functions of sales and finance, there is a growing demand for tech-driven talent in the sector. It’s important to note that the demand for technology expertise isn’t limited to software development but encompasses different competencies, such as cybersecurity, UI/UX development, AI/ML engineering, digital marketing and data analytics. This is expected as there has been an increase in the use of AI and ML in the BFSI landscape, most prominently in fraud detection, KYC verification, sales and marketing processes. ... Now, new-age competencies such as digital skills, data analysis, AI and cybersecurity are increasingly becoming part of these programmes. To meet the growing demand for specialised skills and roles, many BFSI organisations encourage employees with financial expertise to develop digital skills that enable them to work more efficiently. ... At the crossroads of significant industry-level transformations, employers expect a variety of soft skills in addition to technical competencies. 


Cyber Resilience Act Bans Products with Known Vulnerabilities

In future, manufacturers will no longer be allowed to place smart products with known security vulnerabilities on the EU market – if they do, they could face severe penalties ... When it comes to cyber resilience, the legislation of the Cyber Resilience Act makes it clear that customers – both residential and commercial – have an effective right to secure software. However, the race to be the first to discover vulnerabilities continues: organisations would be well advised to implement both effective CVE detection and impact assessment now to better scrutinise their own products and protect themselves against the serious consequences of vulnerability scenarios. “The CRA requires all vendors to perform mandatory testing, monitoring and documentation of the cybersecurity of their products, including testing for unknown vulnerabilities known as ‘zero days’,” said Jan Wendenburg, CEO of ONEKEY, a cybersecurity company based in Duesseldorf, Germany. ... Many manufacturers and distributors are not sufficiently aware of potential vulnerabilities in their own products. 
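
To make the "CVE detection" requirement a little more tangible, here is a minimal sketch that checks a product's component inventory against a list of known-vulnerable versions. The component names, versions, and CVE identifiers are invented for illustration; a real pipeline would query a maintained vulnerability feed and an actual software bill of materials.

# Flag known-vulnerable components in a product's bill of materials.
KNOWN_VULNERABILITIES = {
    ("libexample", "1.2.3"): ["CVE-2023-00001"],
    ("widgetd", "4.0.0"): ["CVE-2024-00002", "CVE-2024-00003"],
}

sbom = [
    {"name": "libexample", "version": "1.2.3"},
    {"name": "widgetd", "version": "4.1.0"},
    {"name": "netstack", "version": "2.7.9"},
]

findings = []
for component in sbom:
    key = (component["name"], component["version"])
    for cve in KNOWN_VULNERABILITIES.get(key, []):
        findings.append((component["name"], component["version"], cve))

if findings:
    for name, version, cve in findings:
        print(f"BLOCK RELEASE: {name} {version} has known vulnerability {cve}")
else:
    print("No known vulnerabilities found in the listed components.")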



Quote for the day:

"Life always begins with one step outside of your comfort zone." -- Shannon L. Alder

Daily Tech Digest - February 13, 2024

Advanced Microsegmentation Strategies for IT Leaders

Microsegmentation, and network segmentation in general, is a 50-year-old cybersecurity strategy that “involves dividing a network into smaller zones to enhance security by restricting the movement of a threat to an isolated segment rather than to the whole network,” says Guy Pearce, a member of the ISACA Emerging Trends Working Group. ... Moyle says that any segmentation (micro or otherwise) can be “part of a security strategy based on use case, architecture and other factors.” He notes that microsegmentation itself isn’t an end goal for security, and that IT leaders should instead see it as “a mechanism that’s part of a broader holistic strategy.” That said, many factors go into a successful microsegmentation implementation, namely careful planning. Microsegmentation goes hand in hand with setting up granular security policies. It also relies on continuous monitoring, evaluation, and user education and awareness, Pearce says. Successful microsegmentation also requires automation, incident response orchestration and cross-team collaboration. None of that is sustainable without a solid, well-maintained network architecture map.
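
A granular microsegmentation policy of the kind described above can be thought of as a default-deny table of allowed flows between zones. The sketch below expresses that idea in a few lines of Python; the zone names, ports, and rules are illustrative assumptions, not a real enforcement mechanism.

# Default-deny: traffic between zones passes only if explicitly allowed
# for a specific protocol and port.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier"): {("tcp", 8443)},
    ("app-tier", "db-tier"): {("tcp", 5432)},
    ("admin-jump", "db-tier"): {("tcp", 22)},
}

def is_allowed(src_zone, dst_zone, protocol, port):
    return (protocol, port) in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(is_allowed("web-tier", "app-tier", "tcp", 8443))   # True: explicitly allowed
# False: there is no direct path, so a compromised web tier cannot reach the database
print(is_allowed("web-tier", "db-tier", "tcp", 5432))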


Could DC win the new data center War of the Currents?

Fundamentally, electronics use DC power. The chips and circuit boards are all powered by direct current, and every computer or other piece of IT equipment that is plugged into the AC mains has to have a “power supply unit” (PSU), also known as a rectifier or switched mode power supply (SMPS), inside the box, turning the power from AC to DC. ... Data centers have an Uninterruptible Power Supply (UPS) designed to power the facility for long enough for generators to fire up. The UPS has to have a large store of batteries, and they are powered by DC. So power enters the data center as AC, is converted to DC to charge the batteries, and then back to AC for distribution to the racks. ... Data centers are now looking at using microgrids for power. That means drawing on-site energy directly from sources such as fuel cells and solar panels. As it turns out, those sources often conveniently produce direct current. A data center could be isolated from the AC grid and live on its own microgrid. On that grid, DC power sources charge the batteries and power the electronics, which fundamentally run on DC. In that situation, the idea of switching to AC for a short loop around the facility begins to look, well, odd.
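
The rough arithmetic below shows why that chain of conversions looks odd. The efficiency figures are assumed round numbers for illustration only, not measurements of any real facility, but they make the shape of the argument clear: every extra AC/DC hop costs a slice of the load.

# Rough conversion-loss arithmetic with assumed efficiencies.
ups_rectifier_eff = 0.96   # AC -> DC to charge the UPS batteries
ups_inverter_eff = 0.96    # DC -> AC for distribution to the racks
server_psu_eff = 0.94      # AC -> DC again inside each server

it_load_kw = 1000.0        # power the IT electronics actually need, in kW

chain_efficiency = ups_rectifier_eff * ups_inverter_eff * server_psu_eff
input_power_kw = it_load_kw / chain_efficiency

print(f"End-to-end conversion efficiency: {chain_efficiency:.1%}")
print(f"Power drawn to deliver {it_load_kw:.0f} kW of DC load: {input_power_kw:.0f} kW")
print(f"Lost in conversions: {input_power_kw - it_load_kw:.0f} kW")

With these assumed numbers, roughly 150 kW of a 1 MW IT load is burned in conversions; a DC microgrid that skips the inverter stage claws some of that back.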


5 key metrics for IT success

Taken together, speed, quality, and value metrics are essential for any organization undergoing transformation and looking to move away from traditional project management approaches, says Sheldon Monteiro, chief product officer at digital consulting firm Publicis Sapient. “This metric isn’t limited to a specific role or level within an IT organization,” he explains. “It’s relevant for everyone involved in the product development process.” Speed, quality, and value metrics represent a shift from traditional project management metrics focused on time, scope, and cost. “Speed ensures the ability to respond swiftly to change, quality guarantees that changes are made without compromising the integrity of systems, and value ensures that the changes contribute meaningfully to both customers and the business,” Monteiro says. “This holistic approach aligns IT practices with the demands of a continuously evolving landscape.” Focusing on speed, quality, and value provides a more nuanced understanding of an organization’s adaptability and effectiveness. “Focusing on speed, quality, and value provides insights into an organization’s ability to adapt to continuous change,” Monteiro says.


The future of cybersecurity: Anticipating changes with data analytics and automation

In recent years, cybersecurity threats have undergone a notable evolution, marked by the subtler tactics of mature threat actors who now leave fewer artifacts for analysis. The old metaphor ‘looking for a needle in a haystack’ (to describe the detection of malicious activity) is now more akin to ‘looking for a needle in a stack of needles.’ This shift necessitates the establishment of additional context around suspicious events to effectively differentiate legitimate from illegitimate activities. Automation emerges as a pivotal element in providing this contextual enrichment, ensuring that analysts can discern relevant circumstances amid the rapid and expansive landscape of modern enterprises. The landscape of cyber threats continues to evolve, and recent high-profile data breaches underscore the gravity of this shift. In response to these challenges, data analytics and automation play a crucial role in detecting lateral movement, privilege escalation, and exfiltration, particularly when threat actors exploit zero-day vulnerabilities to gain entry into an environment.
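
A minimal sketch of that kind of automated context enrichment is shown below: a raw alert is joined with asset and identity context and given a simple score before an analyst sees it. The lookup tables, field names, and scoring weights are illustrative assumptions, not a description of any specific product.

# Enrich a suspicious event with asset and identity context, then score it.
ASSET_CONTEXT = {
    "10.0.5.17": {"owner": "finance", "criticality": "high"},
    "10.0.9.42": {"owner": "lab", "criticality": "low"},
}
RECENT_LOGINS = {"svc-backup": ["10.0.5.17"], "jdoe": ["10.0.9.42"]}

def enrich_and_score(event):
    asset = ASSET_CONTEXT.get(event["dst_ip"], {"criticality": "unknown"})
    expected_hosts = RECENT_LOGINS.get(event["account"], [])
    unusual_host = event["dst_ip"] not in expected_hosts

    score = 0
    score += 40 if unusual_host else 0
    score += 40 if asset["criticality"] == "high" else 10
    score += 20 if event["action"] == "privilege_change" else 0
    return {**event, "asset": asset, "unusual_host": unusual_host, "score": score}

alert = {"account": "svc-backup", "dst_ip": "10.0.9.42", "action": "privilege_change"}
print(enrich_and_score(alert))   # an account touching an unfamiliar host scores high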


Significance of protecting enterprise data

In a world where data fuels innovation and growth, protecting enterprise data is not optional; it’s essential. The digital age has ushered in a complex threat landscape, necessitating a multifaceted approach to data protection. From next-gen SOCs and application security to IAM, data privacy, and collaboration with SaaS providers, every aspect plays a vital role. As traditional security tools and firewalls are no longer sufficient to detect and respond to modern threats, next-generation security operations centres (SOCs) can play a proactive role by leveraging technologies like AI, machine learning, and user behavior analytics. They can analyse huge volumes of data in real-time to detect even the most well-hidden attacks. Early detection and quick response are crucial to minimise damage from security incidents. Next-gen SOCs play a pivotal role in safeguarding enterprises by enhancing visibility, shortening response times, and reducing security risks. Protecting applications is equally important, as in the digital age, applications are the conduit through which data flows. Many successful breaches target exploitable vulnerabilities residing in the application layer, indicating the need for enterprise IT departments to be extra vigilant about application security. 


A changing world requires CISOs to rethink cyber preparedness

A cybersecurity posture that is societally conscious equally requires adopting certain underlying assumptions and taking preparatory actions. Foremost among these is the recognition that neutrality and complacency are anathema to one another in the context of digital threats stemming from geopolitical tension. As I recently wrote, the inherent complexity and significance of norm politicking in international affairs leads to risk that impacts cybersecurity stakeholders in nonlinear fashion. Recent conflicts support the idea that civilian hacking around major geopolitical fault lines, for instance, operates on divergent logics of operations depending on the phase of conflict that is underway. The result of such conditions should not be a reluctance to make statements or take actions that avoid geopolitical relevance. Rather, cybersecurity stakeholders should clearly and actively attempt to delineate the way geopolitical threats and developments reflect the security objectives of the organization and its constituent community. They should do so in a way that is visible to that community. 


AI-powered 6G wireless promises big changes

According to Will Townsend, an analyst at Moor Insights & Strategy, things are accelerating more quickly with 6G than they did with 5G at the same point in its evolution. And speaking of speeds, that will also be one of the biggest and most transformative improvements of 6G over 5G, due to the shift of 6G into the terahertz spectrum range, Townsend says. “This will present challenges because it’s such a high spectrum,” he says. “But you can do some pretty incredible things with instantaneous connectivity. With terahertz, you’re going to get near-instantaneous latency, no lag, no jitter. You’re going to be able to do some sensory-type applications.” ... The new 6G spectrum also brings another benefit – an ability to better sense the environment, says Spirent’s Douglas. “The radio signal can be used as a sensing mechanism, like how sonar is used in submarines,” he says. That can allow use cases that need three-dimensional visibility and complete visualization of the surrounding environment. “You could map out the environment – the shops, buildings, everything – and create a holistic understanding of the surroundings and use that to build new types of services for the market,” Douglas says.


What distinguishes data governance from information governance?

Data governance is primarily concerned with the proper management of data as a strategic asset within an organization. It emphasizes the accuracy, accessibility, security, and consistency of data to ensure that it can be effectively used for decision-making and operations. On the other hand, information governance encompasses a broader spectrum, dealing with all forms of information, not just data. It includes the management of data privacy, security, and compliance, as well as the handling of business processes related to both digital and physical information. ... Implementing data governance ensures that an organization's data is accurate, accessible, and secure, which is vital for operational decision-making and strategic planning. This governance type establishes the necessary protocols and standards for data quality and usage. Information governance, by managing all forms of information, helps organizations comply with legal and regulatory requirements, reduce risks, and enhance business efficiency and effectiveness. It also addresses the management of redundant, outdated, and trivial information, which can lead to cost savings and improved organizational performance.


The Future Is AI, but AI Has a Software Delivery Problem

As more developers become comfortable building AI-powered software, Act Three will trigger a new race: the ability to build, deploy and manage AI-powered software at scale, which requires continuous monitoring and validation at unprecedented levels. This is why crucial DevOps practices for delivering software at scale, like continuous integration and continuous delivery (CI/CD), will play a central role in providing a robust framework for engineering leaders to navigate the complexities of delivering AI-powered software — therefore turning these technological challenges into opportunities for innovation and competitive advantage. Just as software teams have honed practices for getting reliable, observable, available applications safely and quickly into customers’ hands at scale, AI-powered software is yet again evolving these methods. We’re experiencing a paradigm shift from the deterministic outcomes we’ve built software development practices around to a world with probabilistic outcomes. This complexity throws a wrench in the conventional yes-or-no logic that has been foundational to how we’ve tested software, requiring developers to navigate a variety of subjective outcomes.
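
The shift from deterministic to probabilistic outcomes changes what a test in the CI/CD pipeline can even assert. The sketch below contrasts the two styles; the "model" is just a stubbed random function standing in for a real AI component, and the sample size and threshold are assumptions a team would tune for its own risk tolerance.

import random

def model_answers_correctly() -> bool:
    """Stand-in for a nondeterministic AI component under test."""
    return random.random() < 0.92   # assume the component is right ~92% of the time

# Deterministic style: a single yes/no assertion is brittle for probabilistic output.
# assert model_answers_correctly()   # would fail ~8% of the time even if quality is fine

# Probabilistic style: sample repeatedly and gate the release on a statistical threshold.
TRIALS = 200
THRESHOLD = 0.85   # assumed minimum acceptable accuracy for this quality gate

successes = sum(model_answers_correctly() for _ in range(TRIALS))
observed = successes / TRIALS
print(f"observed accuracy: {observed:.2%}")
assert observed >= THRESHOLD, "quality gate failed: accuracy below threshold"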


Generative AI – Examining the Risks and Mitigations

In working with AI, we should be helping executives in the companies we are working with to understand these risks and also the potential applications and innovations that can come from Generative AI. That is why it is essential that we take a moment now to develop a strategy for dealing with Generative AI. By developing a strategy, you will be well positioned to reap the benefits from the capabilities, and will be giving your organization a head-start in managing the risks. When looking at the risks, companies can feel overwhelmed or decide that it represents more trouble than they are willing to accept and may take the stance of banning GenAI. Banning GenAI is not the answer, and will only lead to a bypassing of controls and more shadow IT. So, in the end, employees will use the technology but won’t tell you. ... AI risks can be broadly categorized into three types: Technical, Ethical, and Social. Technical risks refer to the potential failures or errors of AI systems, such as bugs, hacking, or adversarial attacks. Ethical risks refer to the moral dilemmas or conflicts that arise from the use or misuse of AI, such as bias, discrimination, or privacy violations. Social risks refer to the impacts of AI on human society and culture, such as unemployment, inequality, or social unrest.



Quote for the day:

"In the end, it is important to remember that we cannot become what we need to be by remaining what we are." -- Max De Pree

Daily Tech Digest - January 07, 2024

2024 cybersecurity forecast: Regulation, consolidation and mothballing SIEMs

CISOs’ jobs are getting harder. Many are grappling with an onslaught of security threats, and now the legal and regulatory stakes are higher. The new SEC cybersecurity disclosure requirements have many CISOs concerned they’ll be left with the liability when an attack occurs. ... After the Cyber Resilience Act, policymakers and developers drive adoption of security-by-design. The CRA wisely avoided breaking the open source software ecosystem, but now the hard work starts: helping manufacturers adopt modern software development practices that will enable them to ship secure products and comply with the CRA, and driving public investment in open source software security to efficiently raise all boats. ... With the increase in digital business-as-usual, cybersecurity practitioners are already feeling lost in a deluge of inaccurate information from a mushrooming number of cybersecurity solutions, coupled with a lack of cybersecurity architecture and design practices, resulting in porous cyber defenses.


Expert Insight: Adam Seamons on Zero-Trust Architecture

Zero trust goes beyond restricting access by need to know and the principle of least privilege. It’s about properly verifying access and being 110% certain that the access is legitimate. That means things like limiting access to specific criteria, such as by port or protocol, time period, IP address and/or physical location. ... A zero-trust network is about verification or double-checking. You want to be verifying not just the person, but also the device and limiting that access to specific permissions and rights that have been approved in advance. And you’re also restricting data access, particularly in situations like the example I just gave. Think of it like the difference between a key to the front door that gives you access to the whole house, and needing a key for the front door as well as separate keys for all the different rooms. ... AI and machine learning have both been used in detecting anomalies and suspicious patterns for some time, and will only continue to be used more. I expect SOCs to become increasingly reliant on AI. Getting more specific, log analysis is a key area for AI to automate. 
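
As a minimal sketch of the kind of verification described above, the snippet below checks the user, the device posture, and the request context (source network, port, and time window) before granting a narrowly scoped permission: the separate room key rather than the key to the whole house. Every name and policy value is an illustrative assumption.

from datetime import time
from ipaddress import ip_address, ip_network

POLICY = {
    "user": "dba_alice",
    "device_compliant": True,                     # e.g. managed, patched, disk-encrypted
    "allowed_network": ip_network("10.20.0.0/24"),
    "allowed_port": 5432,
    "allowed_hours": (time(8, 0), time(18, 0)),
    "permission": "read-only",                    # scoped right, not the whole house
}

def authorize(request):
    """Default-deny: grant the scoped permission only if every check passes."""
    checks = [
        request["user"] == POLICY["user"],
        request["device_compliant"] == POLICY["device_compliant"],
        ip_address(request["src_ip"]) in POLICY["allowed_network"],
        request["port"] == POLICY["allowed_port"],
        POLICY["allowed_hours"][0] <= request["at"] <= POLICY["allowed_hours"][1],
    ]
    return POLICY["permission"] if all(checks) else None

request = {"user": "dba_alice", "device_compliant": True,
           "src_ip": "10.20.0.15", "port": 5432, "at": time(10, 30)}
print(authorize(request))   # "read-only" only when user, device, network, port, and time all pass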


6 innovative and effective approaches to upskilling

Beverage maker Torani has been mixing up L&D by flipping the traditional performance review — which can be “demoralizing” — on its head. It puts the onus on future rather than past performance and on employee learning aspirations, rather than manager assessment. ... Devine adds: “With today’s shift to agile working, some firms believe yearly performance objectives and appraisals are insufficient and inflexible. They need something more frequent, nimble, and focused on feedback, skills and future needs. But you still need managers to assess performance to justify and provide transparency on promotions and pay decisions.” ... Microsoft is helping workers across its organization gain skills related to AI — from non-techies to IT professionals and leaders. Simon Lambert, chief learning officer at Microsoft UK, says: “One lesson we’ve learned from our AI learning journey is that upskilling means far more than merely equipping employees with skills. It requires an ecosystem that fosters adaptability and continuous learning. In the face of AI-upskilling demand, employees need faster, seamless access to learning infrastructure.”


2024 Data Center Un-Predictions: Five Unlikely Industry Forecasts

The potential impact of data centers on local communities is an important issue. At a recent conference in Virginia, we had activists from the community right alongside data center leaders to discuss the challenges and opportunities we face. While there were still some disconnects, we met in the middle on some critical topics around power, community engagement, and ensuring we create a more sustainable future. ... Leveraging self-driving technology, robots independently chart and traverse the data center, gathering real-time sensor data. This lets them immediately juxtapose present patterns against pre-defined norms, facilitating swift identification of deviations for human examination. In an ever more interconnected and intricate environment, this robotic technology grants decision-makers enhanced visibility, rapidity, and a breadth of intelligence that surpasses what humans or stationary cameras can provide. This advanced capability is vital for maintaining the efficiency and security of data centers in our increasingly digital world.


US DOD’s CMMC 2.0 rules lift burdens on MSPs, manufacturers

The proposed rules also let manufacturers off the hook for complying with NIST SP 800-171. SP 800-171 is a set of NIST cybersecurity rules to protect sensitive federal information. “The requirements of the 171 set of cyber standards are designed for IT networks and information systems,” Metzger says. “They were never really designed for a manufacturing environment. It’s now said clearly in the proposed rules that the assessments won’t apply to operational technology." "That, to me, should cause manufacturers to breathe a huge sigh of relief because being required to meet NIST standards that simply don’t fit a manufacturing or OT environment is a recipe for trouble of many forms," Metzger says. “The most important change is what did not change. The document has essentially the same structure and strategy that was in 1.0. It requires third-party assessments for a very large number of defense suppliers.” The proposed version 2.0 of the CMMC rules was published in the Federal Register on December 26. Interested parties have until February 26 to file comments with the DOD before the agency finalizes the rules.


Banking Innovation is Paramount Even as Regulatory and Competitive Pressures Mount

Guiding technology-forward regulations can empower banks to harness innovation, enhancing security, transparency, and customer value. Regulators should seek thoughtful oversight that encourages innovation while safeguarding against excessive risks instead of attempting to prevent the recurrence of a once-in-a-century financial crisis. Banks face a growing challenge to their market share from alternative lending platforms, which poses an existential threat, as noted in McKinsey’s 2023 Global Banking Annual Review. Over 70% of the growth in global financial assets since 2015 has shifted away from traditional bank lending, finding its way into private markets, institutional investors and the realm of “shadow banking.” Near-zero interest rates have enabled private equity firms and non-bank lenders to offer lower-cost loans. With its digitally savvy consumer base, the fintech sector has further accelerated this transition, particularly during the pandemic.


IT and OT cybersecurity: A holistic approach

As OT becomes more interconnected, the need to safeguard OT systems against cyber threats is paramount. Many cyber threats and vulnerabilities specifically target OT systems, which emphasizes the potential impact on industrial operations. Many OT systems still use legacy technologies and protocols that may have inherent vulnerabilities, as they were not designed with modern cybersecurity standards in mind. They may also use older or insecure communication protocols that may not encrypt data, making them susceptible to eavesdropping and tampering. Concerns about system stability often lead OT environments to avoid frequent updates and patches. This can leave systems exposed to known vulnerabilities. OT systems are not immune to social engineering attacks either. Insufficient training and awareness among OT personnel can lead to unintentional security breaches, such as clicking on malicious links or falling victim to social engineering attacks. Supply chain risks also pose a threat, as third-party suppliers and vendors may introduce vulnerabilities into OT systems if their products or services are not adequately secured.


Exploring the Future of Information Governance: Key Predictions for 2024

In today’s rapidly evolving digital landscape, information governance has become a collective responsibility. Looking ahead to 2024, we can anticipate a significant shift towards closer collaboration between the legal, compliance, risk management, and IT departments. This collaborative effort aims to ensure comprehensive data management and robust protection practices across the entire organization. By adopting a holistic approach and providing cross-functional training, companies can empower their workforce to navigate the complexities of information governance with confidence, enabling them to make informed decisions and mitigate potential risks effectively. Embracing this collaborative mindset will be crucial for organizations to adapt and thrive in an increasingly data-driven world. ... Blockchain technology, with its decentralized and immutable nature, has the tremendous potential to revolutionize information governance across industries. By 2024, as businesses continue to recognize the benefits, we can expect a significant increase in the adoption of blockchain for secure and transparent transaction ledgers.


Data Professional Introspective: Demystifying Data Culture

We are discussing data culture from several points of view: what new content should be added to the DCAM, where it would fall within the current framework structure, what changes we propose to that structure, what modifications should be made to existing content, how the new/modified content would be assessed, and so on. ... One can begin decomposing data culture from a high-level vision, which summarizes what the organization has accomplished when it can feel confident in asserting that, “We have a strong data culture.” One can also compile a collection of activities and behaviors that demonstrate a developed data culture, and then categorize them and parse them into the DCAM. Or, one can apply a combination of the two approaches, which is the path the working group has followed. The working definition posited to date includes a summary description of a strong data culture: “A strong data culture promotes data-driven decision-making, data transparency, and the alignment of data and analytics to business objectives. It prioritizes strategic data use and encourages sharing and collaboration around data.”


Tackling technical debt in the insurance industry

The impact of technical debt on insurers spans various dimensions. Data inefficiencies arise, leading to compliance issues and difficulties in recruiting and retaining talent. Outdated processes hinder optimal decision-making, impacting both established and newer insurers. Addressing technical debt requires insurers to foster a culture of change, emphasising the risks of neglecting this issue and aligning strategies with broader organisational objectives. Tackling technical debt involves immediate action, prioritised backlog creation, and adaptive development processes. Insurers are advised to navigate technical debt through a combination of incremental and transformational changes. Incremental adjustments and breakthrough advancements should complement comprehensive restructuring efforts for sustained and effective resolution. The roadmap to a resilient, innovative future in insurance hinges on proactive management of technical debt. Insurers must embark on their journey towards pricing transformation to remain competitive and future-ready.



Quote for the day:

"In the end, it is important to remember that we cannot become what we need to be by remaining what we are." -- Max De Pree