
Daily Tech Digest - June 02, 2025


Quote for the day:

"The best way to predict the future is to create it." -- Peter Drucker


Doing nothing is still doing something

Here's the uncomfortable truth: doing nothing is still doing something – and very often, it's the wrong thing. We saw this play out at the start of the year when Donald Trump's likely return to the White House and the prospect of fresh tariffs sent ripples through global markets. Investors froze, and while the tariffs have been shelved (for now), the real damage had already been done – not to portfolios, but to behaviour. This is decision paralysis in action. And in my experience, it's most acute among entrepreneurs and high-net-worth individuals post-exit, many of whom are navigating wealth independently for the first time. It's human nature to crave certainty, especially when it comes to money, but if you're waiting for a time when everything is calm, clear, and safe before investing or making a financial decision, I've got bad news – that day is never going to arrive. Markets move, the political climate is noisy, and the global economy is always in flux. If you're frozen by fear, your money isn't standing still – it's slipping backwards. ... Entrepreneurs are used to taking calculated risks, but when it comes to managing post-exit wealth or personal finances, many find themselves out of their depth. A little knowledge can be a dangerous thing – and half-understanding the tax system, the economy, or the markets can lead to costly mistakes.


The Future of Agile Isn’t ‘agile’

One reason is that agilists introduced too many conflicting and divergent approaches that fragmented the market. “Agile” meant so many things to different people that hiring managers could never predict what they were getting when a candidate’s resume indicated s/he was “experienced in agile development.” Another reason organizations failed to generate value with “agile” was that too many agile approaches focused on changing practices or culture while ignoring the larger delivery system in which the practices operate, reinforcing a culture that is resistant to change. This shouldn’t be a surprise to people following our industry, as my colleague and LeadingAgile CEO Mike Cottmeyer has been talking about why agile fails for over a decade, such as his Agile 2014 presentation, Why is Agile Failing in Large Enterprises… …and what you can do about it. The final reason that led “agile” to its current state of disfavor is that early in the agile movement there was too much money to be made in training and certifications. The industry’s focus on certifications had the effect over time of misaligning the goals of the methodology / training companies and their customers. “Train everyone. Launch trains” may be a short-term success pattern for a methodology purveyor, but it is ultimately unsustainable because the training and practices are too disconnected from tangible results senior executives need to compete and win in the market.


CIOs get serious about closing the skills gap — mainly from within

Staffing and talent issues are affecting CIOs’ ability to double down on strategic and innovation objectives, according to 54% of this year’s respondents. As a result, closing the skills gap has become a huge priority. “What’s driving it in some CIOs’ minds is tied back to their AI deployments,” says Mark Moccia, a vice president research director at Forrester. “They’re under a lot of cost pressure … to get the most out of AI deployments” to increase operational efficiencies and lower costs, he says. “It’s driving more of a need to close the skills gap and find people who have deployed AI successfully.” AI, generative AI, and cybersecurity top the list of skills gaps preventing organizations from achieving objectives, according to an April Gartner report. Nine out of 10 organizations have adopted or plan to adopt skills-based talent growth to address those challenges. ... The best approach, Karnati says, is developing talent from within. “We’re equipping our existing teams with the space, tools, and support needed to explore genAI through practical application, including rapid prototyping, internal hackathons, and proof-of-concept sprints,” Karnati says. “These aren’t just technical exercises — they’re structured opportunities for cross-functional learning, where engineers, product leads, and domain experts collaborate to test real use cases.”


The Critical Quantum Timeline: Where Are We Now And Where Are We Heading?

Technically, the term is fault-tolerant quantum computing. The qubits that quantum computers use to process data have to be kept in a delicate state – sometimes frozen to temperatures very close to absolute zero – in order to stay stable and not “decohere”. Keeping them in this state for longer periods of time requires large amounts of energy but is necessary for more complex calculations. Recent research by Google, among others, is pointing the way towards developing more robust and resilient quantum methods. ... One of the most exciting prospects ahead of us involves applying quantum computing to AI. Firstly, many AI algorithms involve solving the types of problems that quantum computers excel at, such as optimization problems. Secondly, with its ability to more accurately simulate and model the physical world, it will generate huge amounts of synthetic data. ... Looking beyond the next two decades, quantum computing will be changing the world in ways we can’t even imagine yet, just as the leap to transistors and microchips enabled the digital world and the internet of today. It will tackle currently impossible problems, help us create fantastic new materials with amazing properties and medicines that affect our bodies in new ways, and help us tackle huge problems like climate change and cleaning the oceans.


6 hard truths security pros must learn to live with

Every technological leap will be used against you - Information technology is a discipline built largely on rapid advances. Some of these technological leaps can help improve your ability to secure the enterprise. But every last one of them brings new challenges from a security perspective, not the least of which is how they will be used to attack your systems, networks, and data. ... No matter how good you are, your organization will be victimized - This is a hard one to swallow, but if we take the “five stages of grief” approach to cybersecurity, it’s better to reach the “acceptance” level than to remain in denial because much of what happens is simply out of your control. A global survey of 1,309 IT and security professionals found that 79% of organizations suffered a cyberattack within the past 12 months, up from 68% just a year ago, according to cybersecurity vendor Netwrix’s Hybrid Security Trends Report. ... Breach blame will fall on you — and the fallout could include personal liability - As if getting victimized by a security breach isn’t enough, new Securities and Exchange Commission (SEC) rules put CISOs in the crosshairs for potential criminal prosecution. The new rules, which went into effect in 2023, require publicly listed companies to report any material cybersecurity incident within four business days.


Are you an A(I)ction man?

Whilst AI-generated action figures individually have a small impact - a drop in the ocean, you could say - trends like this exemplify how easy it is to use AI en masse, and collectively create an ocean of demand. Seeing the number of individuals, even those with knowledge of AI's lofty resource consumption, partaking in the creation of these avatars makes me wonder if we need greater awareness of the collective impact of GenAI. Now, I want to take a moment to clarify that this is not a criticism of those producing AI-generated content, or of anyone who has taken part in the 'action figure' trend. I've certainly had many goes with DALL-E for fun, and taken part in various trends in my time, but the volume of these recent images caught my attention. Many of the conversations I had at Connect New York a few weeks ago addressed sustainability and the need for industry collaboration, but perhaps we should also be instilling more awareness from an end-user point of view. After all, ChatGPT, according to the Washington Post, consumes 39.8 million kWh per day. I'd be fascinated to see the full picture of power and water consumption from the AI-generated action figures. Whilst it will only account for a tiny fraction of overall demand, these drops have a tendency to accumulate.


The MVP Dilemma: Scale Now or Scale Later?

Teams often have few concrete requirements about scalability. The business may not be a reliable source of information but, as we noted above, they do have a business case that has implicit scalability needs. It’s easy for teams to focus on functional needs, early on, and ignore these implicit scaling requirements. They may hope that scaling won’t be a problem or that they can solve the problem by throwing more computing resources at it. They have a legitimate concern about overbuilding and increasing costs, but hoping that scaling problems won't happen is not a good scaling strategy. Teams need to consider scaling from the start. ... The MVP often has implicit scalability requirements, such as "in order for this idea to be successful we need to recruit ten thousand new customers". Asking the right questions and engaging in collaborative dialogue can often uncover these. Often these relate to success criteria for the MVP experiment. ... Some people see asynchronous communication as another scaling panacea because it allows work to proceed independently of the task that initiated the work. The theory is that the main task can do other things while work is happening in the background. So long as the initiating task does not, at some point, need the results of the asynchronous task to proceed, asynchronous processing can help a system to scale. 
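
A minimal Python asyncio sketch of that asynchronous idea (function names are hypothetical, not from the article): the initiating task hands slow work to the background and keeps going, and the scaling benefit lasts only until it actually has to await the result.

```python
import asyncio

async def enrich_order(order_id: str) -> dict:
    # Stand-in for slow background work (e.g., a downstream service call).
    await asyncio.sleep(0.5)
    return {"order_id": order_id, "enriched": True}

async def handle_request(order_id: str) -> None:
    # Kick off the slow work without waiting for it.
    enrichment = asyncio.create_task(enrich_order(order_id))

    # The main task keeps making progress on other things meanwhile.
    print(f"accepted {order_id}, responding to caller immediately")

    # The scaling benefit only holds while the result isn't needed;
    # the moment we await it, we are back to waiting on the slow path.
    result = await enrichment
    print(f"background enrichment finished: {result}")

asyncio.run(handle_request("ord-123"))
```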


Data Integrity: What It Is and Why It Matters

By contrast, data quality builds on methods for confirming the integrity of the data and also considers the data’s uniqueness, timeliness, accuracy, and consistency. Data is considered “high quality” when it ranks high in all these areas based on the assessment of data analysts. High-quality data is considered trustworthy and reliable for its intended applications based on the organization’s data validation rules. The benefits of data integrity and data quality are distinct, despite some overlap. Data integrity allows a business to recover quickly and completely in the event of a system failure, prevent unauthorized access to or modification of the data, and support the company’s compliance efforts. By confirming the quality of their data, businesses improve the efficiency of their data operations, increase the value of their data, and enhance collaboration and decision-making. Data Quality efforts also help companies reduce their costs, enhance employee productivity, and establish closer relationships with their customers. Implementing a data integrity strategy begins by identifying the sources of potential data corruption in your organization. These include human error, system malfunctions, unauthorized access, failure to validate and test, and lack of Governance. A data integrity plan operates at both the database level and business level.
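
As a rough illustration of what a database-level integrity check can look like, the sketch below applies a few hypothetical validation rules to incoming records; a real data integrity plan would derive its rules from the organization's own governance policies, and the field names here are invented.

```python
from datetime import date

def validate_customer(record: dict) -> list[str]:
    # Hypothetical validation rules flagging records that violate basic integrity checks.
    errors = []
    if not record.get("customer_id"):
        errors.append("missing customer_id")
    if record.get("email") and "@" not in record["email"]:
        errors.append("malformed email")
    if record.get("signup_date") and record["signup_date"] > date.today():
        errors.append("signup_date is in the future")
    return errors

rows = [
    {"customer_id": "C-001", "email": "a@example.com", "signup_date": date(2024, 5, 1)},
    {"customer_id": "", "email": "not-an-email", "signup_date": date(2030, 1, 1)},
]

for row in rows:
    problems = validate_customer(row)
    if problems:
        print(row["customer_id"] or "<unknown>", "->", problems)
```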


Backup-as-a-service explained: Your guide to cloud data protection

With BaaS, enterprises have quick, easy access to their data. Providers store multiple copies of backups in different locations so that data can be recovered when lost due to outages, failures or accidental deletion. BaaS also features geographic distribution and automatic failover, in which data handling is automatically moved to a different server or system in the event of an incident to ensure that data remains safe and readily available. ... With BaaS, the provider uses its own cloud infrastructure and expertise to handle the entire backup and restoration process. Enterprises simply connect to the backup engine, set their preferences and the platform handles file transfer, encryption and maintenance. Automation is the engine that drives BaaS, helping ensure that data is continuously backed up without slowing down network performance or interrupting day-to-day work. Enterprises first select the data they need backed up — whether it be simple files or complex apps — as well as backup frequency and data retention times. ... Enterprises shouldn't just jump right into BaaS — proper preparation is critical. Firstly, it is important to define a backup policy that identifies the organization's critical data that must be backed up. This policy should also include backup frequency, storage location and how long copies should be retained.
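
That "select data, set frequency and retention" step usually amounts to a declarative policy the provider acts on. The sketch below shows what such a policy might look like in outline; the field names are illustrative and do not correspond to any particular vendor's API.

```python
from datetime import datetime, timedelta, timezone

# Illustrative backup policy; real BaaS platforms expose equivalent settings
# through their own consoles or APIs.
backup_policy = {
    "datasets": ["finance-db", "hr-files", "crm-app"],
    "frequency": timedelta(hours=6),   # how often backups run
    "retention": timedelta(days=90),   # how long copies are kept
    "copies_per_region": 2,            # geographic distribution
}

def backup_is_due(last_backup: datetime, policy: dict) -> bool:
    # A backup is due once the configured interval has elapsed.
    return datetime.now(timezone.utc) - last_backup >= policy["frequency"]

def copy_is_expired(created: datetime, policy: dict) -> bool:
    # Copies older than the retention window can be pruned.
    return datetime.now(timezone.utc) - created > policy["retention"]

now = datetime.now(timezone.utc)
print(backup_is_due(now - timedelta(hours=7), backup_policy))   # True
print(copy_is_expired(now - timedelta(days=10), backup_policy)) # False
```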


CISO 3.0: Leading AI governance and security in the boardroom

AI is expanding the CISO’s required skillset beyond cybersecurity to include fluency in data science, machine learning fundamentals, and understanding how to evaluate AI models – not just technically, but from a governance and risk perspective. Understanding how AI works and how to use it responsibly is essential. Fortunately, AI has also evolved how we train our teams. For example, adaptive learning platforms that personalize content and simulate real-world scenarios are helping to close the skills gap more effectively. Ultimately, to become successful in the AI space, both CISOs and their teams will need to grasp how AI models are trained, the data they rely on, and the risks they may introduce. CISOs should always prioritize accountability and transparency. Red flags to look out for include a lack of explainability or insufficient auditing capabilities, both of which leave companies vulnerable. It’s important to understand how a tool handles sensitive data, and whether it has proven success in similar environments. Beyond that, it’s also vital to evaluate how well the tool aligns with your governance model, that it can be audited, and that it integrates well into your existing systems. Lastly, overpromising capabilities or providing an unclear roadmap for support are signs to proceed with caution.

Daily Tech Digest - February 07, 2025


Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer


Data, creativity, AI, and marketing: where do we go from here?

While the causes of inefficient data coordination vary, silos remain the most frequent offender. There is still a widespread tendency to collect and store data in isolated buckets, a problem often made all the more challenging by a lingering reliance on manual processing — as underscored by the fact that four in ten cross-industry employees cite structuring, preparing and manipulating information among their top data difficulties. Therefore, a sizable number of organizations are working with fragmented and inconsistent data that requires time-consuming wrangling and is often subject to human error. The obvious problem this poses is a lack of the comprehensive data needed to inform sound decisions. At the AI-assisted marketing level, faulty data has a high potential to jeopardise creative efforts, resulting in irrelevant ads that miss the mark with target audiences and brand goals, and in misguided strategic moves based on skewed analysis. Of course, there are no quick fixes for these complications. But businesses can reach greater data maturity and efficacy by reconfiguring their orchestration methods. With a streamlined system that persistently delivers consolidated data, marketers will be equipped to extract key performance and consumer insights that steer refined and precise AI-enhanced activity.


How AI is transforming strategy development

Beyond these well-understood risks, gen AI presents five additional considerations for strategists. First, it elevates the importance of access to proprietary data. Gen AI is accelerating a long-term trend: the democratization of insights. It has never been easier to leverage off-the-shelf tools to rapidly generate insights that are the building blocks of any strategy. As the adoption of AI models spreads, so do the consequences of relying on commoditized insights. After all, companies that use generic inputs will produce generic outputs, which lead to generic strategies that, almost by definition, lead to generic performance or worse. As a result, the importance of curating proprietary data ecosystems (more on these below) that incorporate quantitative and qualitative inputs will only increase. Second, the proliferation of data and insights elevates the importance of separating signal from noise. This has long been a challenge, but gen AI compounds it. We believe that as the technology matures, it will be able to effectively pull out the signals that matter, but it is not there yet. Third, as the ease of insight generation grows, so does the value of executive-level synthesis. Business leaders—particularly those charged with making strategic decisions—cannot operate effectively if they are buried in data, even if that data is nothing but signal. 


Why Cybersecurity Is Everyone’s Responsibility

Ultimately, cybersecurity is everyone’s responsibility because the fallout affects us all when something goes wrong. When a company goes through a data breach – say it’s ransomware – a number of people are held to task, and even more are impacted. First, the CEO and CISO will rightly be held accountable. Next, security managers will bear their share of the blame and be scrutinized for how they handled the situation. Then, laws and lawmakers will be audited to see if the proper rules were in place. The organization will be investigated for compliance violations, and if found guilty, will pay regulatory fines, legal costs, and maybe lose professional licenses. If the company cannot recover from the reputational damage, revenue will be lost, and jobs will be cut. Lastly, and most importantly, the users who lost their data can likely be impacted for years, even a lifetime. Bank accounts and credit cards will need to be changed, identity theft will be a pressing risk, and in the case of healthcare data breaches, sensitive, unchangeable information could be leaked or used as blackmail against the victims. ... The burden of cybersecurity rests with us all. There is an old saying attributed to Dale Carnegie: “Here lies the body of William Jay, who died maintaining his right of way— He was right, dead right, as he sped along, but he’s just as dead as if he were wrong.”


Spy vs spy: Security agencies help secure the network edge

“Products designed with Secure by Design principles prioritize the security of customers as a core business requirement, rather than merely treating it as a technical feature,” the introductory web page said. “During the design phase of a product’s development lifecycle, companies should implement Secure by Design principles to significantly decrease the number of exploitable flaws before introducing them to the market for widespread use or consumption. Out-of-the-box, products should be secure with additional security features such as multi-factor authentication (MFA), logging, and single sign-on (SSO) available at no extra cost.” ... However, she doesn’t feel that lumping together internet connected firewalls, routers, IoT devices, and OT systems in an advisory is helpful to the community, and “neither is calling them ‘edge devices,’ because it assumes that enterprise IT is the center of the universe and the ‘edge’ is out there.” “That may be true for firewalls, routers, and VPN gateways, but not for OT systems,” she continued. ... Many are internet connected to support remote operations and maintenance, she noted, so “the goal there should be to give advice on how to remote into those systems securely, and the tone of the advisories should be targeted to the production realities where IT security tools and processes are not always a good idea.”


Will the end of Windows 10 accelerate CIO interest in AI PCs?

“The vision around AI PCs is that, over time, more of the models, starting with small language models, and then quantized large language models … more of those workloads will happen locally, faster, with lower latency, and you won’t need to be connected to the internet and it should be less expensive,” the IDC analyst adds. “You’ll pay a bit more for an AI PC but [the AI workload is] not on the cloud and then arguably there’s more profit and it’s more secure.” ... “It’s smart for CIOs to consider some early deployments of these to bring the AI closer to the employees and processes,” Melby says. “A side benefit is that it keeps the compute local and reduces cyber risk to a degree. But it takes a strategic view and precision targeting. The costs of AI PCs/laptops are at a premium right now, so we really need a compelling business case, and the potential for reduced cloud costs could help break loose those kinds of justifications.” Not all IT leaders are on board with running AI on PCs and laptops. “Unfortunately, there are many downsides to this approach, including being locked into the solution, upgrades becoming more difficult, and not being able to benefit from any incremental improvements,” says Tony Marron, managing director of Liberty IT at Liberty Mutual.


Self-sovereign identity could transform fraud prevention, but…

Despite these challenges, SSI has the potential to be a powerful tool in the fight against fraud. Consider the growing use of mobile driver’s licenses (mDLs). These digital credentials allow users to prove their identity quickly and securely without exposing unnecessary personal information. Unlike traditional forms of identification, which often reveal more data than needed, SSI-based credentials operate on the principle of minimal disclosure, only sharing the required details. This limits the amount of exploitable information in circulation and reduces identity theft risk. Another promising area is passwordless authentication. For years, we’ve talked about the death of the password, yet reliance on weak, easily compromised credentials persists. SSI could accelerate the transition to more secure authentication mechanisms, using biometrics and cryptographic certificates instead of passwords. By eliminating centralized repositories of login credentials, businesses can significantly reduce the risk of credential-stuffing attacks and phishing attempts. However, the likelihood of a fully realized SSI wallet that consolidates identity documents, payment credentials and other sensitive information remains low, at least in the near future. The convenience factor isn’t there yet, and without significant consumer demand, businesses have little motivation to push for mass adoption.
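
The minimal-disclosure principle behind mDLs can be sketched without any real SSI stack: the holder presents only the claims a verifier asks for. The toy example below uses plain dictionaries purely to illustrate the idea; production systems rely on selective-disclosure cryptography such as BBS+ signatures or SD-JWT rather than handing over raw values.

```python
# Toy credential held by the user; in a real wallet this would be a signed credential.
credential = {
    "name": "Alex Doe",
    "date_of_birth": "1990-04-02",
    "address": "12 Example Street",
    "over_18": True,
    "licence_class": "B",
}

def present(credential: dict, requested_claims: list[str]) -> dict:
    # Share only what the verifier asked for; everything else stays with the holder.
    return {claim: credential[claim] for claim in requested_claims if claim in credential}

# A bar only needs proof of age, not the holder's name or address.
print(present(credential, ["over_18"]))                     # {'over_18': True}
# A car rental might additionally need the licence class.
print(present(credential, ["over_18", "licence_class"]))
```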


The Staging Bottleneck: Microservices Testing in FinTech

Two common scaling strategies exist: mocking dependencies, which sacrifices fidelity and risks failures in critical integrations, or duplicating staging environments, which is costly and complex due to compliance needs. Teams often resort to shared environments, causing bottlenecks, interference and missed bugs — slowing development and increasing QA overhead. ... By multiplexing the baseline staging setup, sandboxes provide tailored environments for individual engineers or QA teams without adding compliance risks or increasing maintenance burdens, as they inherit the same compliance and configuration frameworks as production. These environments allow teams to work independently while maintaining fidelity to production conditions. Sandboxes integrate seamlessly with external APIs and dependencies, replicating real-world scenarios such as rate limits, timeouts and edge cases. This enables robust testing of workflows and edge cases while preserving isolation to avoid disruptions across teams or systems. ... By adopting sandboxes, FinTech organizations can enable high-quality, efficient development cycles, ensuring compliance while unlocking innovation at scale. This paradigm shift away from monolithic staging environments toward dynamic, scalable sandboxes gives FinTech companies a critical competitive advantage.
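
One common way to multiplex a single staging baseline is request-level routing: traffic tagged with a sandbox identifier is steered to that team's overridden service, while everything else falls through to the shared baseline. The sketch below illustrates that routing decision; the header name and service registry are invented for the example.

```python
# Hypothetical header-based routing used to multiplex a shared staging baseline.
BASELINE = {"payments": "payments.staging.internal", "auth": "auth.staging.internal"}
SANDBOX_OVERRIDES = {
    # sandbox id -> services this sandbox has overridden with its own build
    "alice-feature-x": {"payments": "payments.alice.sandbox.internal"},
}

def resolve_backend(service: str, headers: dict) -> str:
    sandbox = headers.get("x-sandbox-id")
    if sandbox and service in SANDBOX_OVERRIDES.get(sandbox, {}):
        return SANDBOX_OVERRIDES[sandbox][service]
    # Everything not overridden falls back to the shared baseline environment.
    return BASELINE[service]

print(resolve_backend("payments", {"x-sandbox-id": "alice-feature-x"}))  # sandboxed build
print(resolve_backend("auth", {"x-sandbox-id": "alice-feature-x"}))      # shared baseline
print(resolve_backend("payments", {}))                                   # shared baseline
```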


From Code to Culture: Adapting Workplaces to the AI Era

As AI reshapes industries, it also exposes a critical gap in workforce readiness. The skills required to excel in an AI-driven world are evolving rapidly, and many employees find their current capabilities misaligned with these new demands. In this context, reskilling is not just a response to technological disruption; it is a strategic necessity for ensuring long-term organisational resilience. Today’s workforce is broadening its skillset at an unprecedented pace. Professionals are acquiring 40% more diverse skills compared to five years ago, reflecting the growing need to adapt to the complexities of AI-integrated workplaces. AI literacy has emerged as a crucial area of focus, encompassing abilities like prompt engineering and proficiency with tools. ... Beyond its operational benefits, AI is reimagining innovation and strategic decision-making in a volatile business environment characterised by economic uncertainty and rapid technological shifts. However, organisations must tread carefully. AI is not a panacea, and its effectiveness depends on thoughtful implementation. Ethical considerations like data privacy, algorithmic bias, and the potential for job displacement must be addressed to ensure that AI augments rather than undermines human potential. Transparent communication about AI’s role in the workplace can foster trust and help employees understand its benefits.


CIOs and CISOs grapple with DORA: Key challenges, compliance complexities

“As often happens with such ambitious regulations, the path to compliance is particularly complex,” says Giuseppe Ridulfo, deputy head of the organization department and head of IS at Banca Etica. “This is especially true for smaller entities, such as Banca Etica, which find themselves having to face significant structural challenges. DORA, although having shared objectives, lacks a principle of proportionality that takes into account the differences between large institutions and smaller banks.” This is compounded for smaller organizations due to the prevalence of outsourcing for these firms, Ridulfo explains. “This operating model, which allows access to advanced technologies and skills, clashes with the stringent requirements of the regulation, in particular those that impose rigorous control over third-party suppliers and complex management of contracts relating to essential or important functions,” he says. ... The complexity of DORA, therefore, is not in the text itself, although substantial, but in the work it entails for compliance. As Davide Baldini, lawyer and partner of the ICT Legal Consulting firm, points out, “DORA is a very clear law, as it is a regulation, which is applied equally in all EU countries and contains very detailed provisions. 


True Data Freedom Starts with Data Integrity

Data integrity is essential to ensuring business continuity, and the movement of data poses a significant risk. A lack of pre-migration testing is the main cause of issues such as data corruption and data loss during the movement of data. These issues lead to unexpected downtime, reputational damage, and loss of essential information. As seen in this year’s global incident, one fault, no matter how small, can result in a significant negative impact on the business and its stakeholders. This incident sends a clear message – testing before implementation is essential. Without proper testing, organizations cannot identify potential issues and implement corrective measures. ... This includes testing for both functionality, or how well the system operates after migration, and economics, the cost-effectiveness of the system or application. Functionality testing ensures a system continues to meet expectations. Economics testing involves examining resource consumption, service costs and overall scalability to ascertain whether the solution is economically sustainable for the business. This is particularly important with cloud-based migrations. While organizations can conduct these audits manually, tools on the market can also help conduct regular automated data integrity audits.
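
A simple example of the kind of check such audits automate is comparing row counts and content checksums between the source and target systems before cutover. The sketch below assumes both sides fit in memory; a real migration test would run the same comparison per table or per partition.

```python
import hashlib

def table_fingerprint(rows: list[dict]) -> tuple[int, str]:
    # Order-insensitive checksum over the rows, plus a row count.
    digests = sorted(
        hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()
        for row in rows
    )
    combined = hashlib.sha256("".join(digests).encode()).hexdigest()
    return len(rows), combined

source_rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 12.5}]
target_rows = [{"id": 2, "amount": 12.5}, {"id": 1, "amount": 10.0}]

if table_fingerprint(source_rows) == table_fingerprint(target_rows):
    print("migration check passed: counts and checksums match")
else:
    print("mismatch detected: investigate before cutover")
```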


Daily Tech Digest - October 20, 2024

6 Strategies for Overcoming the Weight of Process Debt

While technical debt is a more familiar concept stemming from software development that describes the cost of taking shortcuts or using quick fixes in code, process debt relates to inefficiencies and redundancies within organizational workflows and procedures. Process debt can also have far-reaching effects that are often less obvious to business leaders, making it an insidious force that can silently undermine business operations. ... Rather than simply adding a new technology into an old process or duplicating legacy steps in a new application, organizations need to undertake a detailed audit of existing processes to uncover inefficiencies, redundancies, and inaccuracies that contribute to process debt. This audit should involve a systematic review of all workflows, procedures, and operational activities to identify areas where performance is falling short or where resources are being wasted. To gain a deeper understanding, leverage process mapping tools to create visual representations of workflows. These tools allow you to document each step of a process, highlight how tasks flow between different departments or systems, and uncover hidden bottlenecks or points of friction.


Domain-specific GenAI is Coming to a Network Near You

Now, we're seeing domain-specific models crop up. These are specialized models that focus on some industry or incorporate domain best practices that can be centrally trained and then deployed and fine-tuned by organizations. They are built on specific knowledge sets rather than the generalized corpus of information on which conversational AI is trained. ... By adopting domain-specific generative AI, companies can achieve more accurate and relevant outcomes, reducing the risks associated with general-purpose models. This approach not only enhances productivity but also aligns AI capabilities with specific business needs. ... The question now is whether this specialization can be applied to domains like networking, security, and application delivery. Yes, but no. The truth is that predictive (classic) AI is going to change these technical domains forever. But it will do so from the inside-out; that is, predictive AI will deliver real-time analysis of traffic that enables an operational AI to act. That may well be generative AI if we are including agentic AI in that broad category. But GenAI will have an impact on how we operate networking, security, and application delivery. 


The human factor: How companies can prevent cloud disasters

A company’s post-mortem process reveals a great deal about its culture. Each of the top tech companies requires teams to write post-mortems for significant outages. The report should describe the incident, explore its root causes and identify preventative actions. The post-mortem should be rigorous and held to a high standard, but the process should never single out individuals to blame. Post-mortem writing is a corrective exercise, not a punitive one. If an engineer made a mistake, there are underlying issues that allowed that mistake to happen. Perhaps you need better testing, or better guardrails around your critical systems. Drill down to those systemic gaps and fix them. Designing a robust post-mortem process could be the subject of its own article, but it’s safe to say that having one will go a long way toward preventing the next outage. ... If engineers have a perception that only new features lead to raises and promotions, reliability work will take a back seat. Most engineers should be contributing to operational excellence, regardless of seniority. Reward reliability improvements in your performance reviews. Hold your senior-most engineers accountable for the stability of the systems they oversee.


Ransomware siege: Who’s targeting India’s digital frontier?

Small and medium-sized businesses (SMBs) are often the most vulnerable. This past July, a ransomware attack forced over 300 small Indian banks offline, cutting off access to essential financial services for millions of rural and urban customers. This disruption has severe consequences in a country where digital banking and online financial services are becoming lifelines for people’s day-to-day transactions. According to a report by Kaspersky, 53% of Indian SMBs experienced ransomware attacks in 2023, with 559 million attacks occurring between April and May of this year, making them the most targeted segment. ... For SMBs, the cost of paying ransomware, retrieving proprietary data, returning to full operations, and recovering lost revenue can be too much to bear. For this reason, many businesses opt to pay the ransom, even when there is no guarantee that their data will be fully restored. The Indian financial sector, in particular, has been a favourite target. This year the National Payment Corporation of India (NPCI), which runs the country’s digital payment systems, was forced to take systems offline temporarily due to an attack. Beyond the financial impact, these incidents erode trust in India’s push for a digital-first economy, impacting the country’s progress toward digital banking adoption.


What AMD and Intel’s Alliance Means for Data Center Operators

AMD and Intel’s alliance was a surprise for many. But industry analysts said their partnership makes sense and is much needed, given the threat that Arm poses in both the consumer and data center space. While x86 processors still dominate the data center space, Arm has made inroads with cloud providers Amazon Web Services, Google Cloud and Microsoft Azure building their own Arm-based CPUs and startups like Ampere having entered the market in recent years. Intel and AMD’s partnership confirms how strong Arm is as a platform in the PC, data center and smartphone markets, the Futurum Group's Newman said. But the two giant chipmakers still have the advantage of having a huge installed base and significant market share. Through the new x86 advisory group, AMD and Intel can benefit by making it easier for data center operators to leverage x86, he said. “This partnership is about the experience of the x86 customer base, trying to make it stickier and trying to give them less reason to potentially move off of the platform is valuable,” Newman said. “x86’s longevity will benefit meaningfully from less complexity and making it easier for customers.”


Cyber resilience is improving but managing systemic risk will be key

“Cyber insurance is recognised as a core component of a robust cyber risk management strategy. While we have seen fluctuations in cyber rates and capacity over the last five years, more recently we have seen rates softening in the market,” Cotelle said. “The emergence and adoption of AI has clear potential to revolutionise how businesses operate, which will create new opportunities but also new exposures. In the cyber risk context, AI is a double-edged sword. First, it can be exploited by threat actors to conduct more sophisticated attacks,” he said. ... He stressed, however, that one of the biggest challenges facing the cyber market is how it understands and manages systemic cyber risks. He said there is a case for considering the use of reinsurance pools and public/private partnerships to do this. “The continued attractiveness of the cyber insurance solution is paramount to the sustainability and growth of the market. In recent years, we have seen work by insurers to clarify particular aspects of coverage relating to areas such as cyber-related property damage, cyber war or infrastructure, which has led to coverage restrictions.”


Cyber resilience vs. cybersecurity: Which is more critical?

A common misconception is that cyber resilience means strong cybersecurity and that the organization won’t be compromised because their defenses are impenetrable. No defense is ever 100 percent secure because IT products have flaws and cybercriminals, and nation state-sponsored threat actors are continually changing their tactics, techniques and procedures (TTPs) to take advantage of any weaknesses they can find. And, of course, any organization with cyber resilience still needs quality cyber security in the first place. Resilience isn’t promising that bad things won’t happen; resilience promises that when they do, the organization can overcome that and continue to thrive. Cybersecurity is one of the foundations upon which resilience stands. Although cyber threats have increased in frequency and sophistication in recent years, there’s a huge amount that businesses in every sector can do to reduce the chances of being compromised and to prepare for the worst. The investment in time, energy and resources to prepare for a cyber incident is well worth it for the results you’ll see. Being cyber resilient is becoming a selling point as well. 


Building Digital Resilience: Insider Insights for a Safer Cyber Landscape

These “basics” sound simple and are not difficult to implement, but we (IT, Security teams, and the Business) routinely fail at them. We tend to focus on the fancy new tool, the shiny new dashboard, quarterly profits, or even the latest analytical application. Yes, these are important and have their place, but we should ensure we have the “basics” down to protect the business so it can focus on profit and growth. Using patching as an example, if we can patch our prioritized vulnerabilities promptly, we reduce our threat landscape, which, in turn, offers attackers fewer doors and windows into our environment. The term may seem a little dated, but defense in depth is a solid method for defending our often-porous environments. Using multiple levels of security, such as strong passwords, multi-factor authentication, resilience training, and patching strategies, makes it harder for threat actors, so they tend to move to another target with weaker defenses. ... In an increasingly digital world, robust recovery capabilities are not just a safety net but a strategic advantage and a tactical MUST. The actions taken before and after a breach are what truly matter to reduce the costliest impacts—business interruption.


Information Integrity by Design: The Missing Piece of Values-Aligned Tech

To have any chance of fixing our dysfunctional relationship with information, we need solutions that can take on the powerful incentives, integration scale, and economic pull of the attention economy as we know it, and realign the market. One good example is the emerging platform Readocracy, designed from the outset with features that allow users to have much more control and context over their information experience. This includes offering users control over the algorithm, providing nudges to direct attention more mindfully, and providing information on how informed commenters are on subjects on which they are commenting. ... An information integrity by design initiative can focus on promoting the six components of information integrity outlined above so readers and researchers can make informed decisions on the integrity of the information provided. Government promotion and support can drive and support corporate adoption of the concept much like it's done for security by design, privacy by design, and, most recently, safety by design. ... Information integrity deserves fierce advocacy from governments, the intellectual ingenuity of civil society, and the creative muscle of industry. 


The backbone of security: How NIST 800-88 and 800-53 compliance safeguards data centers

When discussing data center compliance, it’s important not to leave out a key player: the National Institute of Standards and Technology (NIST). NIST is a non-regulatory federal agency whose cybersecurity framework is one of the most widely recognized and adopted, offering the industry’s most comprehensive and in-depth set of framework controls. NIST’s mission is to educate citizens on information system security for all applications outside of national security, including industry, government, academia, and healthcare, on both a national and global scale. Its strict and robust standards and guidelines are widely recognized and adopted by both data centers and government entities seeking to improve their processes, quality, and security. ... NIST 800-88 covers various types of media, including hard drives (HDDs), solid-state drives (SSDs), magnetic tapes, optical media, and other media storage devices. NIST 800-88 has quickly become the foremost standard for the U.S. Government and has been continuously referenced in federal data privacy laws. More so, NIST 800-88 regulations have been increasingly adopted by private companies and organizations, especially data centers.



Quote for the day:

"To have long-term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley

Daily Tech Digest - October 13, 2024

Fortifying Cyber Resilience with Trusted Data Integrity

While it is tempting to put all of the focus on keeping the bad guys out, there is an important truth to remember: Cybercriminals are persistent and eventually, they find a way in. The key is not to try and build an impenetrable wall, because that wall does not exist. Instead, organizations need to have a defense strategy at the data level. By monitoring data for signs of ransomware behavior, the spread of the attack can be slowed or even stopped. It includes analyzing data and watching for patterns that indicate a ransomware attack is in progress. When caught early, organizations have the power to stop the attack before it causes widespread damage. Once an attack has been identified, it is time to execute the curated recovery plan. That means not just restoring everything in one action but instead selectively recovering the clean data and leaving the corrupted files behind. ... Trusted data integrity offers a new way forward. By ensuring that data remains clean and intact, detecting corruption early, and enabling a faster, more intelligent recovery, data integrity is the key to reducing the damage and cost of a ransomware attack. In the end, it’s all about being prepared. 
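
One widely used data-level signal of ransomware behaviour is a sudden jump in file entropy, since encrypted content looks close to random. The sketch below illustrates that single check; the threshold is illustrative, and real detection products combine many such signals before raising an alert.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Encrypted or compressed data approaches 8 bits of entropy per byte.
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(before: bytes, after: bytes, threshold: float = 7.5) -> bool:
    # Flag files whose content jumped from "normal" to near-random entropy.
    return shannon_entropy(before) < threshold <= shannon_entropy(after)

plain = b"quarterly report: revenue grew modestly in Q3 " * 50
random_like = bytes(range(256)) * 10  # stand-in for ciphertext

print(looks_encrypted(plain, random_like))  # True -> possible ransomware activity
print(looks_encrypted(plain, plain))        # False -> file content still looks normal
```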


Regulating AI Catastrophic Risk Isn't Easy

Catastrophic risks are those that cause a failure of the system, said Ram Bala, associate professor of business analytics at Santa Clara University's Leavey School of Business. Risks could range from endangering all of humanity to more contained impact, such as disruptions affecting only enterprise customers of AI products, he told Information Security Media Group. Deming Chen, professor of electrical and computer engineering at the University of Illinois, said that if AI were to develop a form of self-interest or self-awareness, the consequences could be dire. "If an AI system were to start asking, 'What's in it for me?' when given tasks, the results could be severe," he said. Unchecked self-awareness might drive AI systems to manipulate their abilities, leading to disorder, and potentially catastrophic outcomes. Bala said that most experts see these risks as "far-fetched," since AI systems currently lack sentience or intent, and likely will for the foreseeable future. But some form of catastrophic risk might already be here. Eric Wengrowski, CEO of Steg.AI, said that AI's "widespread societal or economic harm" is evident in disinformation campaigns through deepfakes and digital content manipulation. 


The Importance of Lakehouse Formats in Data Streaming Infrastructure

Most data scientists spend the majority of their time updating those data in a single format. However, when your streaming infrastructure has data processing capabilities, you can update the formats of that data at the ingestion layer and land the data in the standardized format you want to analyze. Streaming infrastructure should also scale seamlessly like Lakehouse architectures, allowing organizations to add storage and compute resources as needed. This scalability ensures that the system can handle growing data volumes and increasing analytical demands without major overhauls or disruptions to existing workflows. ... As data continues to play an increasingly central role in business operations and decision-making, the importance of efficient, flexible, and scalable data architectures will only grow. The integration of lakehouse formats with streaming infrastructure represents a significant step forward in meeting these evolving needs. Organizations that embrace this unified approach to data management will be better positioned to derive value from their data assets, respond quickly to changing market conditions, and drive innovation through advanced analytics and AI applications.
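
In practice, "updating formats at the ingestion layer" often means normalizing each streaming record into the table's standard schema before it lands in the lakehouse. The sketch below shows that normalization step with an invented schema and field names; a real pipeline would then write the rows through an Iceberg or Delta writer.

```python
from datetime import datetime, timezone

# Hypothetical standard schema for the lakehouse table.
TARGET_SCHEMA = ["event_id", "user_id", "event_type", "event_time"]

def normalize(raw_event: dict) -> dict:
    # Map differently shaped source events onto one standard lakehouse schema.
    return {
        "event_id": str(raw_event.get("id") or raw_event.get("event_id")),
        "user_id": str(raw_event.get("user") or raw_event.get("user_id")),
        "event_type": (raw_event.get("type") or "unknown").lower(),
        "event_time": raw_event.get("ts")
        or datetime.now(timezone.utc).isoformat(),
    }

batch = [
    {"id": 1, "user": 42, "type": "CLICK", "ts": "2024-10-20T09:00:00Z"},
    {"event_id": "2", "user_id": "43", "type": "purchase"},
]

rows = [normalize(e) for e in batch]
assert all(list(r.keys()) == TARGET_SCHEMA for r in rows)
print(rows)
```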


Open source culture: 9 core principles and values

Whether you’re experienced or just starting out, your contributions are valued in open source communities. This shared responsibility helps keep the community strong and makes sure the projects run smoothly. When people come together to contribute and work toward shared goals, it fuels creativity and drives productivity. ... While the idea of meritocracy is incredibly appealing, there are still some challenges that come along with it. In reality, the world is not fair and people do not get the same opportunities and resources to express their ideas. Many people face challenges such as lack of resources or societal biases that often go unacknowledged in "meritocratic" situations. Essentially, open source communities suffer from the same biases as any other communities. For meritocracy to truly work, open source communities need to actively and continuously work to make sure everyone is included and has a fair and equal opportunity to contribute. ... Open source is all about how everyone gets a chance to make an impact and difference. As mentioned previously, titles and positions don’t define the value of your work and ideas—what truly matters is the expertise, work and creativity you bring to the table.


How to Ensure Cloud Native Architectures Are Resilient and Secure

Microservices offer flexibility and faster updates but also introduce complexity — and more risk. In this case, the company had split its platform into dozens of microservices, handling everything from user authentication to transaction processing. While this made scaling more accessible, it also increased the potential for security vulnerabilities. With so many moving parts, monitoring API traffic became a significant challenge, and critical vulnerabilities went unnoticed. Without proper oversight, these blind spots could quickly become significant entry points for attackers. Unmanaged APIs could create serious vulnerabilities in the future. If these gaps aren’t addressed, companies could face major threats within a few years. ... As companies increasingly embrace cloud native technologies, the rush to prioritize agility and scalability often leaves security as an afterthought. But that trade-off isn’t sustainable. By 2025, unmanaged APIs could expose organizations to significant breaches unless proper controls are implemented today.


Focus on Tech Evolution, Not on Tech Debt

Tech Evolution represents a mindset shift. Instead of simply repairing the system, Tech Evolution emphasises continuous improvement, where the team proactively advances the system to stay ahead of future requirements. It’s a strategic, long-term investment in the growth and adaptability of the technology stack. Tech Evolution is about future-proofing your platform. Rather than focusing on past mistakes (tech debt), the focus shifts toward how the technology can evolve to accommodate new trends, user demands, and business goals. ... One way to action Tech Evolution is to dedicate time specifically for innovation. Development teams can use innovation days, hackathons, or R&D-focused sprints to explore new ideas, tools, and frameworks. This builds a culture of experimentation and continuous learning, allowing the team to identify future opportunities for evolving the tech stack. ... Fostering a culture of continuous learning is essential for Tech Evolution. Offering training programs, hosting workshops, and encouraging attendance at conferences ensures your team stays informed about emerging technologies and best practices. 


Singapore’s Technology Empowered AML Framework

Developed by the Monetary Authority of Singapore (MAS) in collaboration with six major banks, COSMIC is a centralised digital platform for global information sharing among financial institutions to combat money laundering, terrorism financing, and proliferation financing, enhancing defences against illicit activities. By pooling insights from different financial entities, COSMIC enhances Singapore’s ability to detect and disrupt money laundering schemes early, particularly when transactions cross international borders (IMC Report). Another significant collaboration is the Anti-Money Laundering/Countering the Financing of Terrorism Industry Partnership (ACIP). This partnership between MAS, the Commercial Affairs Department (CAD) of the Singapore Police Force, and private-sector financial institutions allows for the sharing of best practices, the issuance of advisories, and the development of enhanced AML measures. ... Another crucial aspect of Singapore’s AML strategy is the AML Case Coordination and Collaboration Network (AC3N). This new framework builds on the Inter-Agency Suspicious Transaction Reports Analytics (ISTRA) task force to improve coordination between all relevant agencies.


Future-proofing Your Data Strategy with a Multi-tech Platform

Traditional approaches that were powered by a single tool or two, like Apache Cassandra or Apache Kafka, were once the way to proceed. However, now used alone, these tools are proving insufficient to meet the demands of modern data ecosystems. The challenges presented by today’s distributed, real-time, and unstructured data have made it clear that businesses need a new strategy. Increasingly, that strategy involves the use of a multi-tech platform. ... Implementing a multi-tech platform can be complex, especially considering the need to manage integrations, scalability, security, and reliability across multiple technologies. Many organizations simply do not have the time or expertise in the different technologies to pull this off. Increasingly, organizations are partnering with a technology provider that has the expertise in scaling traditional open-source solutions and the real-world knowledge in integrating the different solutions. That’s where Instaclustr by NetApp comes in. Instaclustr offers a fully managed platform that brings together a comprehensive suite of open-source data technologies. 


Strong Basics: The Building Blocks of Software Engineering

It is alarmingly easy to assume a “truth” on faith when, in reality, it is open to debate. Effective problem-solving starts by examining assumptions because the assumptions that survive your scrutiny will dictate which approaches remain viable. If you didn’t know your intended plan rested on an unfounded or invalid assumption, imagine how disastrously it would be to proceed anyway. Why take that gamble? ... Test everything you design or build. It is astounding how often testing gets skipped. A recent study showed that just under half of the time, information security professionals don’t audit major updates to their applications. It’s tempting to look at your application on paper and reason that it should be fine. But if everything worked like it did on paper, testing would never find any issues — yet so often it does. The whole point of testing is to discover what you didn’t anticipate. Because no one can foresee everything, the only way to catch what you didn’t is to test. ... companies continue to squeeze out more productivity from their workforce by adopting the cutting-edge technology of the day, generative AI being merely the latest iteration of this trend. 


The resurgence of DCIM: Navigating the future of data center management

A significant factor behind the resurgence of DCIM is the exponential growth in data generation and the requirement for more infrastructure capacity. Businesses, consumers, and devices are producing data at unprecedented rates, driven by trends such as cloud computing, digital transformation, and the Internet of Things (IoT). This influx of data has created a critical demand for advanced tools that can offer comprehensive visibility into resources and infrastructure. Organizations are increasingly seeking DCIM solutions that enable them to efficiently scale their data centers to handle this growth while maintaining optimal performance. ... Modern DCIM solutions, such as RiT Tech’s XpedITe, also leverage AI and machine learning to provide predictive maintenance capabilities. By analyzing historical data and identifying patterns, it will predict when equipment is likely to fail and automatically schedule maintenance ahead of any failure as well as providing automation of routine tasks such as resource allocations. As data centers continue to grow in size and complexity, effective capacity planning becomes increasingly important. DCIM solutions provide the tools needed to plan and optimize capacity, ensuring that data center resources are used efficiently and that there is sufficient capacity to meet future demand.



Quote for the day:

“Too many of us are not living our dreams because we are living our fears.” -- Les Brown