
Daily Tech Digest - November 10, 2025


Quote for the day:

"You can only lead others where you yourself are willing to go." -- Lachlan McLean



CISOs must prove the business value of cyber — the right metrics can help

With a foundational ERM program, and by aligning metrics to business priorities, cybersecurity leaders can ultimately prove the value of the cybersecurity function. Useful metrics examples in business terms include maturity, compliance, risk, budget, business value streams, and status of SecDevOps (shifting left) adoption, Oberlaender explains. But how does a cybersecurity expert learn what’s important to the business? ... “Boards are faced with complex matters such as impact on interest rates, tariffs, stock price volatility, supply chain issues, profitability, and acquisitions. Then the CISO enters the boardroom with their MITRE ATT&CK framework, patching metrics and NIST maturity models,” Hetner continues. “These metrics are not aligned to what the board is conditioned to reviewing.” ... Rather than just asking “are we secure?” business leaders are asking what metrics their cyber teams are using to measure and quantify risk and how they’re spending against those risks. For CISOs, this goes beyond measuring against frameworks such as NIST, listing a litany of security vulnerabilities they patched, or their mean time to respond. “Instead, we can say, ‘This is our potential financial exposure’,” Nolen explains. “So now you’re talking dollars and cents rather than CVEs and technical scores that board members don’t care about. What they care about is the bottom line.”


Feeding the AI beast, with some beauty

AI-driven growth is placing an unprecedented load on data centres worldwide, and India is poised to shoulder a large share of the incremental electricity, real estate, and cooling burden created by rising AI demand. The IEA’s estimates chart a trajectory of rapidly accelerating AI demand. Under realistic scenarios, AI workloads alone could require on the order of 1–1.5 GW of continuous IT power—equivalent to 8.8–13 TWh annually—in India by 2030. This translates into a significant new draw on grids, water resources, and capex for cooling and power infrastructure. Recent analyses indicate that while AI’s share of data centre power today stands in the single-digit to low-teens range, it could climb to 20–40 per cent or more by 2030 in some scenarios, fundamentally reshaping the power-consumption profile of digital infrastructure. ... As data centres grow in scale, sustainability is becoming a competitive differentiator—and that’s where Life Cycle Assessments (LCAs) and Environmental Product Declarations (EPDs) play a critical role. An LCA is a systematic method for evaluating the total environmental impact of a product, process, or system across its entire life cycle. For a data centre, this spans both upstream (embodied) impacts—such as construction materials, IT equipment manufacturing, and cooling and power infrastructure including gensets—and operational impacts like electricity consumption.
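As a back-of-the-envelope check, the TWh range follows directly from the GW range, assuming continuous year-round draw (8,760 hours per year):

```latex
1\,\mathrm{GW} \times 8760\,\mathrm{h/yr} \approx 8.8\,\mathrm{TWh/yr},
\qquad
1.5\,\mathrm{GW} \times 8760\,\mathrm{h/yr} \approx 13.1\,\mathrm{TWh/yr}
```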


8 IT leadership tips for first-time CIOs

Generally speaking, the first three years can make or break your IT leadership career, given that digital leaders globally tend to stay at one company for just over that length of time on average, according to the 2025 Nash Squared Digital Leadership Report. CIOs looking to sidestep that statistic are taking intentional measures, ensuring they get early wins, and perhaps most importantly, not coming into their role with preconceived ideas about how to lead or assuming what worked in a past job can be replicated. ... The CTO of staffing and recruiting firm Kelly says that “building momentum, finding ways to get quick wins from the low-hanging fruit” will help build credibility with the leadership team. Then, you can parlay those into bigger wins and avoid spinning out, he says. ... While making connections and establishing relationships is critical, Lewis stresses the importance of not rushing to change things right away when you’re new to the job. “Let it set for a while,” he says. ... This is especially true of midsize and larger midsize organizations “where the clarity of strategy and clarity of what’s important … isn’t always well documented and well thought out,” Rosenbaum says. Knowing the maturity of your organization is really important, he says. “Some CIO roles are just about keeping the lights on, making sure security is good at a lower level. As the company starts to mature, they start thinking about technology as an enabler, and to that end, they start having maybe a more unified technology strategy.”


Drata’s VP of Data on Rethinking Data Ops for the AI Era: Crawl, Walk, Run — Then Sprint

While GenAI may be the shiny new tool, Solomon makes it clear that foundational work around ingestion and transformation is far from trivial. “We live and die by making sure that all the data has been ingested in a fresh manner into the data warehouse,” he explains. He describes the “bread and butter” of the team: synchronizing thousands of MySQL databases from a single-tenant production architecture into the warehouse, as close to real time as possible. “We do a lot of activities with regard to the CDC pipeline, which is just like driving terabytes of data per day.” But the data team isn’t working in isolation. GTM executives return from conferences excited about GenAI. ... Rather than building fully fledged pipelines from day one, the team prioritizes quick feedback loops — using sandboxes, cloud notebooks, or Streamlit apps to test hypotheses. Once business impact is validated, the team gradually introduces cost tracking, governance, and scalability. If a stakeholder’s hypothesis lacks merit, there is no point in building complex data pipelines, governance frameworks, or cost-tracking systems. This shift in mindset, he explains, is something many data teams are grappling with today. Traditionally, data teams were trained to focus on building scalable, robust pipelines from day one — often requiring significant upfront effort. But this often led to cost inefficiencies and delays.
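As a minimal sketch of that quick-feedback-loop approach, here is a hypothetical Streamlit app for sanity-checking a stakeholder hypothesis against a one-off warehouse extract before any pipeline, governance, or cost-tracking work begins. The file name, column names, and hypothesis are invented for illustration, not taken from Drata's actual stack:

```python
# hypothesis_check.py -- run with: streamlit run hypothesis_check.py
# A throwaway app for validating a stakeholder hypothesis against a sample
# extract before committing to pipelines, governance, or cost tracking.
import pandas as pd
import streamlit as st

st.title("Hypothesis: trials that invite teammates convert more often")

# Illustrative CSV extract; in practice, a one-off sample pulled from the warehouse.
uploaded = st.file_uploader("Upload a CSV extract of trial accounts", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    # Assumed columns: 'invited_teammates' (bool), 'converted' (bool).
    rates = df.groupby("invited_teammates")["converted"].mean()
    st.bar_chart(rates)
    st.write("Conversion rate by cohort:", rates)
```

If the cohorts look indistinguishable, the hypothesis dies here at the cost of an afternoon, not a quarter of pipeline engineering.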


Model Context Protocol Servers: Build or Buy?

"The tension lies in whether you have the sustained capacity to keep pace with protocols that are still being debated by their maintainers," said Rishi Bhargava, co-founder at Descope, a customer and agentic IAM platform. "Are you prepared to build the plane while it's flying, or would you rather upgrade a finished plane mid-flight?" ... "From a business perspective, the build versus buy decision for MCP servers boils down to strategic priorities and risk appetite," Jain said. Building MCP servers in-house gives you "complete control," but buying provides "speed, reliability, and lower operational burden," he said. But others think there's no reason to rush your decision. ... "Most companies shouldn't be doing either yet," he said, explaining that companies should first focus on the specific business goals they are trying to achieve, rather than on which existing applications they think should have AI features added. "Build when you have an actual AI application that requires custom data integration and you understand exactly what intelligence you're trying to deploy. If you're simply connecting ChatGPT to your CRM, you don't need MCP at all," Prywata said. ... "It is usually best to build [MCP servers] in-house when compliance, performance tuning, or data sovereignty are key priorities for the business," said Marcus McGehee, founder at The AI Consulting Lab. 


Every CIO Fails; The Smart Ones Admit It

There's a "hero CIO" myth deeply rooted in our mindset - the idea that you're the person who makes technology work, no matter what. Admitting failure feels like admitting incompetence, especially in boardrooms where few understand the complexity of IT. Organizational incentives also discourage openness. Many companies punish failure more than they reward learning. I've seen talented CIOs denied promotion because of a single delayed project, even when their broader portfolio delivered value. When institutional memory focuses on what went wrong rather than what was learned, people stop taking risks. The second factor is C-suite politics. In some environments, transparency becomes ammunition. Another team might use a project delay to justify requests for budget increases or to exert influence. And finally, CIOs worry about vendor perception, admitting setbacks could impact pricing, support or their reputation with partners. ... Build your transparency muscle in peacetime, not when something is on fire. By the time a crisis hits, it's too late to establish credibility. Make transparency habitual. Share work in progress, not just results. Celebrate learning, not perfection. Run "pre-mortems" where you assume a project failed and work backwards to identify what could go wrong. And when you make a mistake, own it publicly. The honesty earns you more trust than a polished explanation ever will.


6 proven lessons from the AI projects that broke before they scaled

In analyzing dozens of AI PoCs that sailed through to full production use — or didn’t — six common pitfalls emerge. Interestingly, it’s usually not the quality of the technology but misaligned goals, poor planning or unrealistic expectations that cause failure. ... Define specific, measurable objectives upfront. Use SMART criteria. For example, aim for “reduce equipment downtime by 15% within six months” rather than a vague “make things better.” Document these goals and align stakeholders early to avoid scope creep. ... Invest in data quality over volume. Use tools like Pandas for preprocessing and Great Expectations for data validation to catch issues early. Conduct exploratory data analysis (EDA) with visualizations (like Seaborn) to spot outliers or inconsistencies. Clean data is worth more than terabytes of garbage. ... Start simple. Use straightforward algorithms like a scikit-learn random forest or XGBoost to establish a baseline. Only scale to complex models — TensorFlow-based long short-term memory (LSTM) networks — if the problem demands it. Prioritize explainability with tools like SHAP to build trust with stakeholders. ... Plan for production from day one. Package models in Docker containers and deploy with Kubernetes for scalability. Use TensorFlow Serving or FastAPI for efficient inference. Monitor performance with Prometheus and Grafana to catch bottlenecks early. Test under realistic conditions to ensure reliability.
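As a minimal sketch of the “start simple” and explainability advice above: a scikit-learn random forest baseline with SHAP feature attributions, run on synthetic stand-in data (the data generation and feature count are invented for illustration):

```python
# Baseline-first modeling: a random forest plus SHAP explanations,
# before reaching for anything deep-learning-shaped.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in; a real project would load its cleaned dataset here.
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Baseline R^2:", round(model.score(X_test, y_test), 3))

# SHAP turns the baseline into something stakeholders can interrogate.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(3))
```

Only once this baseline is measurably insufficient does it make sense to escalate to LSTMs or other complex architectures.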


Andela CEO talks about the need for ‘borderless talent’ amid work visa limitation

Globally, three out of four IT employers say they lack the tech talent they need, and the outlook will only grow more dire as AI creates demand for high-skilled specialists like data engineers, senior architects, and agentic orchestrators. Visa programs aren’t designed by the laws of supply and demand. They’re defined by policymakers and are updated infrequently. So, they’ll never truly be in sync with the needs of the labor market. ... Brilliant people exist around the world. It’s why employers want to sponsor people for H-1B visas. But hiring outside of those traditional pathways — to work with a brilliant machine learning engineer from Cairo or São Paulo, for example — is…a long, painful process that takes months and is inaccessible to them. They don’t know that they can find the right partner, someone who has sorted this all out and vetted talent and developed compliance with global labor and tax laws, etc. Once they understand that those partners exist, the global workforce becomes instantly accessible to them. ... Technical hiring still feels like a gamble, even though software development is, relatively speaking, packed with deterministic skills. There are two main problems. One problem is the data problem. There’s not enough reliable data about what a job actually requires and what a worker is capable of doing. Today, we rely on resumes and job descriptions.


The Overwhelm Epidemic: Why Resilience Begins with You

People have so much to do and not enough time. There’s nothing new about the phenomenon of having too little time for everything that needs to be done, but today it’s different: this feeling of overwhelm has been expanding continuously since the pandemic began in early 2020. We’re being overwhelmed to an extent most people are not equipped to deal with.
For you in operational resilience, I believe self-care is more critical now than it has ever been. You are only able to help your clients and their systems be resilient to the extent you are taking care of yourself and are resilient. ... Most say something like, “I’m going to double down and focus on this. I’m going to work harder and spend as much time as needed, even if it means cutting into my already precious personal time.” They think working harder is the best approach, but here’s the thing—they are wrong.
When you are operating at high stress levels, introducing more stress by doubling down and working harder actually reduces your output. ... Bottom line, a thriving, elite mindset is the foundation of personal wellbeing and professional success.
Turning to positive psychology: underlying Martin Seligman‘s model of human flourishing are 24 positive character strengths. While more research is still needed, the research to date has concluded that of the 24, the best predictor of living a flourishing, thriving life is gratitude.


Ask a Data Ethicist: What Are the Impacts of AI on Creativity, Schools, and Industry?

Generally speaking, if the goal is to reduce the cost of labour by replacing it with equipment (capital – or AI), then assuming the AI tool replaces the labour in a way that is acceptable to drive the desired outputs, the business could possibly drive more profit. So that might be construed as positive for the business. However, businesses exist in the bigger context of society. To take an extreme example, if a large section of the population loses their jobs, they can’t buy your products, and that could hurt your organization. It also places a greater burden on society’s social safety net, perhaps resulting in tax increases or other impacts to business to pay for those services. ... I think it’s important to disclose the use of AI in a process. For video, audio or images, a symbol or some text saying “AI generated” can accomplish that goal. Watermarking that content is a more technical method of achieving the same thing. For text, it’s trickier. I don’t think everyone needs to be told about every instance of a spellchecker (to use an extreme example), but if the whole thing is generated, then it is important to say that. This is where a policy can be helpful. For example, one might apply the 80/20 rule – if less than 20% is generated, perhaps it’s not necessary to disclose it. That said, there had better not be any inaccuracies or errors in the content if you choose NOT to disclose it. See this case in Australia. This is an example of why I think disclosing, overall, is a good idea.

Daily Tech Digest - June 05, 2025


Quote for the day:

"The greatest accomplishment is not in never falling, but in rising again after you fall." -- Vince Lombardi


Your Recovery Timeline Is a Lie: Why They Fall Apart

Teams assume they can pull snapshots from S3 or recover databases from a backup tool. What they don’t account for is the reconfiguration time required to stitch everything back together. ... RTOs need to be redefined through the lens of operational reality and validated through regular, full-system DR rehearsals. This is where IaC and automation come in. By codifying all layers of your infrastructure — not just compute and storage, but IAM, networking, observability and external dependencies, too — you gain the ability to version, test and rehearse your recovery plans. Tools like Terraform, Helm, OpenTofu and Crossplane allow you to build immutable blueprints of your infrastructure, which can be automatically redeployed in disaster scenarios. But codification alone isn’t enough. Continuous testing is critical. Just as CI/CD pipelines validate application changes, DR validation pipelines should simulate failover scenarios, verify dependency restoration and track real mean time to recovery (MTTR) metrics over time. ... It’s also time to stop relying on aspirational RTOs and instead measure actual MTTR. It’s what matters when things go wrong, indicating how long it really takes to go from incident to resolution. Unlike RTOs, which are often set arbitrarily, MTTR is a tangible, trackable indicator of resilience.
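In that spirit, a minimal sketch of measuring actual MTTR from incident records rather than quoting an aspirational RTO (the incident timestamps and the four-hour RTO are invented for illustration):

```python
# Measure real mean time to recovery (MTTR) from incident logs,
# then compare it against the stated RTO.
from datetime import datetime, timedelta

# Illustrative incident records: (detected_at, resolved_at).
incidents = [
    (datetime(2025, 1, 4, 2, 10), datetime(2025, 1, 4, 7, 45)),
    (datetime(2025, 2, 11, 14, 0), datetime(2025, 2, 11, 16, 30)),
    (datetime(2025, 3, 22, 9, 15), datetime(2025, 3, 22, 21, 5)),
]

durations = [resolved - detected for detected, resolved in incidents]
mttr = sum(durations, timedelta()) / len(durations)
print(f"Actual MTTR across {len(incidents)} incidents: {mttr}")

# How far apart are aspiration and reality?
stated_rto = timedelta(hours=4)
print("Within stated RTO?", mttr <= stated_rto)
```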


The Dawn of Unified DataOps—From Fragmentation to Transformation

Data management has traditionally been the responsibility of IT, creating a disconnect between this function and the business departments that own and understand the data’s value. This separation has resulted in limited access to unified data across the organization, including the tools and processes to leverage it outside of IT. ... Organizations looking to embrace DataOps and transform their approach to data must start by creating agile DataOps teams that leverage software-oriented methodologies; investing in data management solutions that leverage DataOps and data mesh concepts; investing in scalable automation and integration; and cultivating a data-driven culture. Much like agile software teams, DataOps teams should include product management, domain experts, test engineers, and data engineers. Approach delivery iteratively, incrementally delivering MVPs, testing, and improving capabilities and quality. ... Technology alone won’t solve data challenges. Truly transformative DataOps strategies align with unified teams that pair business users and subject matter experts with DataOps professionals, forming a culture where collaboration, accessibility, and transparency are at the core of decision making.


Redefining Cyber Value: Why Business Impact Should Lead the Security Conversation

A BVA brings clarity to that timeline. It identifies the exposures most likely to prolong an incident and estimates the cost of that delay based on both your industry and organizational profile. It also helps evaluate the return on preemptive controls. For example, IBM found that companies that deploy effective automation and AI-based remediation see breach costs drop by as much as $2.2 million. Some organizations hesitate to act when the value isn't clearly defined. That delay has a cost. A BVA should include a "cost of doing nothing" model that estimates the monthly loss a company takes on by leaving exposures unaddressed. We've found that for a large enterprise, that cost can exceed half a million dollars. ... There's no question about how well security teams are doing the work. The issue is that traditional metrics don't always show what their work means. Patch counts and tool coverage aren't what boards care about. They want to know what's actually being protected. A BVA helps connect the dots – showing how day-to-day security efforts help the business avoid losses, save time, and stay more resilient. It also makes hard conversations easier. Whether it's justifying a budget, walking the board through risk, or answering questions from insurers, a BVA gives security leaders something solid to point to.
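A minimal sketch of the kind of "cost of doing nothing" model described here, using a simple probability-times-impact estimate per exposure (every exposure name, probability, and dollar figure below is invented for illustration):

```python
# A toy "cost of doing nothing" model: expected monthly loss from
# leaving exposures unaddressed = breach probability x estimated impact.
exposures = [
    # (name, monthly breach probability, estimated impact in USD)
    ("unpatched VPN appliance", 0.04, 4_000_000),
    ("stale admin credentials", 0.02, 9_000_000),
    ("unmonitored storage buckets", 0.06, 2_500_000),
]

for name, p, impact in exposures:
    print(f"  {name}: ${p * impact:,.0f}/month")

expected_monthly_loss = sum(p * impact for _, p, impact in exposures)
print(f"Expected monthly loss of inaction: ${expected_monthly_loss:,.0f}")
```

A real BVA would calibrate these inputs from industry breach data and the organization's own profile; the point is that even a rough model turns "we should fix this" into a dollar figure.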


Fake REAL Ids Have Already Arrived, Here’s How to Protect Your Business

When the REAL ID Act of 2005 was introduced, it promised to strengthen national security by setting higher standards for state-issued IDs, especially for air travel, access to federal buildings, and more. Since then, the roll-out of the REAL ID program has faced delays, but with an impending enforcement deadline, many are questioning whether REAL IDs deliver the level of security intended. ... While the original aim was to prevent another 9/11-style attack, over 20 years later the focus has shifted to protecting against identity theft and illegal immigration. The final deadline to get your REAL ID is now May 7th, 2025, owing in part to differing opinions and adoption rates state by state, which have dragged enforcement on for two decades. ... The delays and staggered adoption have given bad actors the chance to create templates for fraudulent REAL IDs. Businesses may incorrectly assume that an ID bearing a REAL ID star symbol is more likely to be legitimate, but as our data proves, this is not the case. REAL IDs can be faked just as easily as any other identity document, putting the onus on businesses to implement robust ID verification methods to ensure they don’t fall victim to ID fraud. ... AI-powered identity verification is one of the only ways to combat the increasing use of AI-powered criminal tools.


How this 'FinOps for AI' certification can help you tackle surging AI costs

To really adopt AI into your enterprise, we're talking about costs that are orders of magnitude greater. Companies are turning to FinOps for help dealing with this. FinOps, a portmanteau of Finance and DevOps, combines financial management and collaborative, agile IT operations into a discipline to manage costs. It started as a way to get a handle on cloud pricing. FinOps' first job is to optimize cloud spending and align cloud costs with business objectives. ... Today, they're adding AI spending to their concerns. According to the FinOps Foundation, 63% of FinOps practitioners are already being asked to manage AI costs, a number expected to rise as AI innovation continues to surge. Mismanagement of these costs can not only erode business value but also stifle innovation. "FinOps teams are being asked to manage accelerating AI spend to allocate its cost, forecast its growth, and ultimately show its value back to the business," said Storment. "But the speed and complexity of the data make this a moving target, and cost overruns in AI can slow innovation when not well managed." Besides, Storment added, C-level executives are asking that painful question: "You're using this AI service and spending too much. Do you know what it's for?" 


Tackling Business Loneliness

Leaders who intentionally reach out to their employees do more than combat loneliness; they directly influence performance and business success. "To lead effectively, you need to lead with care. Because care creates connection. Connection fuels commitment. And commitment drives results. It's in those moments of real connection that collective brilliance is unlocked," she concludes. ... But it's not just women; many men face isolation in the workplace too, especially where a 'put up and shut up' culture prevails. The high prevalence of suicide in the UK construction industry underlines why it is essential that toxic cultures are dismantled and all employees feel valued and part of the team. "Whether they work on site or remotely, full time or part time, building an inclusive culture helps to ensure people do not experience prolonged loneliness or lack of connection. When we prioritise inclusion, everyone benefits," Allen concludes. ... Providing a safe, non-judgemental space for employees to discuss loneliness, things that are troubling them, and ways to manage any negative feelings is crucial. "This could be with a trusted line manager or colleague, but objective support from professional therapists and counsellors should also be accessible to prevent loneliness from manifesting into more serious issues," she emphasises.


Revolutionizing Software Development: Agile, Shift-Left, and Cybersecurity Integration

While shift-left may cost more resources in the short term, in most cases the long-term savings more than make up for the initial investment. Bugs discovered after a product release can cost up to 640 times more than those caught during development. In addition, late detection can increase the risk of fines from security breaches and damage a brand’s trust. Automation tools are the primary answer to these concerns and are at the core of what makes shift-left possible. The popular tech industry mantra, “automate everything,” continues to apply. Static analysis, dynamic analysis, and software composition analysis tools scan for known vulnerabilities and common bugs, producing instant feedback as code is first merged into development branches. ... Shift-left balances speed with quality. Performing regular checks on code as it is written reduces the likelihood that significant defects and vulnerabilities will surface after a release. Once software is out in the wild, the cost to fix issues is much higher and requires extensively more work than catching them in the early phases. Despite the advantages of shift-left, navigating the required cultural change can be a challenge. As such, it’s crucial for developers to be set up for success with effective tools and proper guidance.


Feeling Reassured by Your Cybersecurity Measures?

Organizations must pursue a data-driven approach that embraces comprehensive NHI management. This approach, combined with robust Secrets Security Management, can ensure that none of your non-human identities become security weak points. Remember, feeling reassured about your cybersecurity measures is not just about having security systems in place, but also about knowing how to manage them effectively. Effective NHI management will be a cornerstone in instilling peace of mind and enhancing security confidence. ... Imagine a simple key, one that turns the tumblers in a lock but isn’t alone in doing so. Other keys fit the same lock, and they all have the power to unlock the same door. This is similar to an NHI and its associated secret. Numerous NHIs could access the same system or part of a system, each granted access via its unique ‘Secret’. Now, here’s where it gets a little complex. ... Just as a busy airport needs security checkpoints to screen passengers and verify their credentials, a robust NHI management system is needed to accurately identify and manage all NHIs.


How to Capitalize on Software Defined Storage, Securely and Compliantly

Because it fundamentally transforms data infrastructure, SDS is critical for technology executives to understand and capitalize on. It not only provides substantial cost savings and predictability while reducing the staff time required to manage physical hardware; it also makes companies much more agile and flexible in their business operations. For example, launching new initiatives or products that start small and scale quickly is much easier with SDS. As a result, SDS does not just impact IT; it is a critical function across the enterprise. Software-defined storage in the cloud has brought major operational and cost benefits for enterprises. First, subscription business models enable buyers to make much more cost-conscious decisions and avoid wasting resources and usage. ... In addition, software-defined storage has transformed technology management frameworks. SDS has enabled a move to agile DevOps, which includes real-time analytics resulting in faster iteration, less downtime and more efficient resource allocation. With real-time dashboards and alerts, organizations can now track key KPIs such as uptime and performance and react instantly. IT management can be more proactive, increasing storage or resource capacity when needed rather than waiting for a crash to react.


The habits that set future-ready IT leaders apart

Constructive discomfort is the impetus to continuous learning, adaptability, agility, and anti-fragility. The concept of anti-fragility means being designed for change. How do we build anti-fragile humans so they are unbreakable and prepared for tomorrow’s world, whatever it brings? We have these fault-tolerant designs where I can unplug a server and the system adapts and you don’t even know it. We want to create that same anti-fragility and fault tolerance in the human beings we train. We’re living in this ever-changing, accelerating VUCA [volatile, uncertain, complex, ambiguous] world, and there are two responses when you are presented with the unknown or the unexpected: You can freeze and be fearful and have it overcome you, or you can improvise, adapt, and overcome it by being a continuous learner and continuous adapter. I think resiliency in human beings is driven by this constructive discomfort, which creates a path to being continuous learners and continuous adapters. ... Strategic competence is knowing what hill to take, tactical competence is knowing how to take that hill safely, and technical competence is rolling up your sleeves and helping along the way. The leaders I admire have all three. The person who doesn’t have technical competence may set forth an objective and even chart the path to get there, but then they go have coffee. That leader is probably not going to do well.

Daily Tech Digest - April 16, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How to lead humans in the age of AI

Quiet the noise around AI and you will find the simple truth that the most crucial workplace capabilities remain deeply human. ... This human skills gap is even more urgent when Gen Z is factored in. They entered the workforce aligned with a shift to remote and hybrid environments, resulting in fewer opportunities to hone interpersonal skills through real-life interactions. This is not a critique of an entire generation, but rather an acknowledgment of a broad workplace challenge. And Gen Z is not alone in needing to strengthen communication across generational divides, but that is a topic for another day. ... Leaders must embrace their inner improviser. Yes, improvisation, like what you have watched on Whose Line Is It Anyway? Or the awkward performance your college roommate invited you to in that obscure college lounge. The skills of an improviser are a proven method for thriving amidst uncertainty. Decades of experience at Second City Works and studies published by The Behavioral Scientist confirm the principles of improv equip us to handle change with agility, empathy, and resilience. ... Make listening intentional and visible. Respond with the phrase, “So what I’m hearing is,” followed by paraphrasing what you heard. Pose thoughtful questions that indicate your priority is understanding, not just replying.


When companies merge, so do their cyber threats

Merging two companies means merging two security cultures. That is often harder than unifying tools or policies. While the technical side of post-M&A integration is important, it’s the human and procedural elements that often introduce the biggest risks. “When CloudSploit was acquired, one of the most underestimated challenges wasn’t technical, it was cultural,” said Josh Rosenthal, Holistic Customer Success Executive at REPlexus.com. “Connecting two companies securely is incredibly complex, even when the acquired company is much smaller.” Too often, the focus in M&A deals lands on surface-level assurances like SOC 2 certifications or recent penetration tests. While important, those are “table stakes,” Rosenthal noted. “They help, but they don’t address the real friction: mismatched security practices, vendor policies, and team behaviors. That’s where M&A cybersecurity risk really lives.” As AI accelerates the speed and scale of attacks, CISOs are under increasing pressure to ensure seamless integration. “Even a phishing attack targeting a vendor onboarding platform can introduce major vulnerabilities during the M&A process,” Rosenthal warned. To stay ahead of these risks, he said, smart security leaders need to dig deeper than documentation.


Measuring success in dataops, data governance, and data security

If you are on a data governance or security team, consider the metrics that CIOs, chief information security officers (CISOs), and chief data officers (CDOs) will consider when prioritizing investments and the types of initiatives to focus on. Amer Deeba, GVP of Proofpoint DSPM Group, says CIOs need to understand what percentage of their data is valuable or sensitive and quantify its importance to the business—whether it supports revenue, compliance, or innovation. “Metrics like time-to-insight, ROI from tools, cost savings from eliminating unused shadow data, or percentage of tools reducing data incidents are all good examples of metrics that tie back to clear value,” says Deeba. ... Dataops technical strategies include data pipelines to move data, data streaming for real-time data sources like IoT, and in-pipeline data quality automations. Using the reliability of water pipelines as an analogy is useful because no one wants pipeline blockages, leaky pipes, pressure drops, or dirty water from their plumbing systems. “The effectiveness of dataops can be measured by tracking the pipeline success-to-failure ratio and the time spent on data preparation,” says Sunil Kalra, practice head of data engineering at LatentView. “Comparing planned deployments with unplanned deployments needed to address issues can also provide insights into process efficiency.”
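A minimal sketch of the metrics Kalra describes — a pipeline success-to-failure ratio and the share of unplanned deployments — computed from simple run records (the log format is an assumption; in practice these would be queried from your orchestrator's metadata):

```python
# Track dataops pipeline health from run and deployment records.
from collections import Counter

# Illustrative records; real ones come from the orchestrator.
runs = ["success", "success", "failure", "success", "success", "failure", "success"]
deploys = ["planned", "planned", "unplanned", "planned", "unplanned"]

run_counts = Counter(runs)
ratio = run_counts["success"] / max(run_counts["failure"], 1)
print(f"Success-to-failure ratio: {ratio:.1f}:1")

unplanned_share = Counter(deploys)["unplanned"] / len(deploys)
print(f"Unplanned deployments: {unplanned_share:.0%} of all deployments")
```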


How Safe Is the Code You Don’t Write? The Risks of Third-Party Software

Open-source and commercial packages and public libraries accelerate innovation, drive down development costs, and have become the invisible scaffolding of the Internet. GitHub recently highlighted that 99% of all software projects use third-party components. But with great reuse comes great risk. Third-party code is a double-edged sword. On the one hand, it’s indispensable. On the other hand, it’s a potential liability. In our race to deliver software faster, we’ve created sprawling software supply chains with thousands of dependencies, many of which receive little scrutiny after the initial deployment. These dependencies often pull in other dependencies, each one potentially introducing outdated, vulnerable, or even malicious code into environments that power business-critical operations. ... The risk is real, so what do we do? We can start by treating third-party code with the same caution and scrutiny we apply to everything else that enters the production pipeline. This includes maintaining a living inventory of all third-party components across every application and monitoring their status to prescreen updates and catch suspicious changes. With so many ways for threats to hide, we can’t take anything on trust, so next comes actively checking for outdated or vulnerable components as well as new vulnerabilities introduced by third-party code. 
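As a first step toward that living inventory, a minimal sketch that enumerates the third-party Python distributions installed in an environment (a real pipeline would emit a standard SBOM format such as CycloneDX and cross-check it against vulnerability feeds rather than just printing):

```python
# Enumerate installed third-party distributions: the seed of a
# living inventory of dependencies.
from importlib.metadata import distributions

inventory = sorted(
    (dist.metadata["Name"], dist.version)
    for dist in distributions()
    if dist.metadata["Name"] is not None
)
for name, version in inventory:
    print(f"{name}=={version}")
print(f"\n{len(inventory)} distributions installed")
```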


The AI Leadership Crisis: Why Chief AI Officers Are Failing (And How To Fix It)

Perhaps the most dangerous challenge facing CAIOs is the profound disconnect between expectations and reality. Many boards anticipate immediate, transformative results from AI initiatives – the digital equivalent of demanding harvest without sowing. AI transformation isn't a sprint; it's a marathon with hurdles. Meaningful implementation requires persistent investment in data infrastructure, skills development, and organizational change management. Yet CAIOs often face arbitrary deadlines that are disconnected from these realities. One manufacturing company I worked with expected their newly appointed CAIO to deliver $50 million in AI-driven cost savings within 12 months. When those unrealistic targets weren't met, support for the role evaporated – despite significant progress in building foundational capabilities. ... There are many potential risks of AI, from bias to privacy concerns, and the right level of governance is essential. CAIOs are typically tasked with ensuring responsible AI use yet frequently lack the authority to enforce guidelines across departments. This accountability-without-authority dilemma places CAIOs in an impossible position. They're responsible for AI ethics and risk management, but departmental leaders can ignore their guidance with minimal consequences.


OT security: how AI is both a threat and a protector

Burying one’s head in the sand, a favorite pastime among some OT personnel, no longer works. Security through obscurity is and remains a bad idea. Heinemeyer: “I’m not saying that everyone will be hacked, but it is increasingly likely these days.” Possibly, the ostrich policy has to do with, yes, the reporting on OT vulnerabilities, including by yours truly. Ancient protocols, ICS systems and PLCs with exploitable vulnerabilities are evidently risk factors. However, the people responsible for maintaining these systems at manufacturing and utility facilities know better than anyone that actual exploitation of these obscure systems is improbable. ... Given the increasing threat, is the new focus on common best practices enough? We have already concluded that vulnerabilities should not be judged solely on the CVSS score. They are an indication, certainly, but a combination of CVEs with middle-of-the-range scoring appears to have the most serious consequences. Heinemeyer says that the resolve to identify all vulnerabilities as the ultimate solution was well established from the 1990s to the 2010s. In recent years, he says, security professionals have realized that specific issues need to be prioritized, quantifying technical exploitability through various measurements (e.g., EPSS).


In a Social Engineering Showdown: AI Takes Red Teams to the Mat

In a revelation that shouldn’t surprise security professionals, but should still alarm them, AI has gotten much more proficient at social engineering. Back in the day, AI was 31% less effective than human beings at creating simulated phishing campaigns. But now, new research from Hoxhunt suggests that the game-changing technology’s phishing performance against elite human red teams has improved by 55%. ... Using AI offensively can raise legal and regulatory hackles related to privacy laws and ethical standards, Soroko adds, as well as creating a dependency risk. “Over-reliance on AI could diminish human expertise and intuition within cybersecurity teams.” But that doesn’t mean bad actors will win the day or get the best of cyber defenders. Instead, security teams could and should turn the tables on them. “The same capabilities that make AI an effective phishing engine can — and must — be used to defend against it,” says Avist. With an emphasis on “must.” ... It seems that tried and true basics are a good place to start. “Ensuring transparency, accountability and responsible use of AI in offensive cybersecurity is crucial,” says Kowski. As with any aspect of tech and security, keeping AI models “up-to-date with the latest threat intelligence and attack techniques is also crucial,” he says. “Balancing AI capabilities with human expertise remains a key challenge.”


Optimizing CI/CD for Trust, Observability and Developer Well-Being

While speed is often cited as a key metric for CI/CD pipelines, the quality and actionability of the feedback provided are equally, if not more, important for developers. Jones, emphasizing the need for deep observability, stresses, “Don’t just tell me that the steps of the pipeline succeeded or failed, quantify that success or failure. Show me metrics on test coverage and show me trends and performance-related details. I want to see stack traces when things fail. I want to be able to trace key systems even if they aren’t related to code that I’ve changed because we have large complex architectures that involve a lot of interconnected capabilities that all need to work together.” This level of technical insight empowers developers to understand and resolve issues quickly, highlighting the importance of implementing comprehensive monitoring and logging within your CI/CD pipeline to provide developers with detailed insights into build, test, and deployment processes. Shifting feedback earlier in the development lifecycle serves everyone well; the key is ensuring it is contextual and arrives before code is merged. For example, running security scans at the pull request stage, rather than after deployment, ensures developers get actionable feedback while still in context.


AI agents vs. agentic AI: What do enterprises want?

If AI and AI agents are application components, then they fit into both business processes and workflows. A business process is a flow, and these days at least part of that flow is the set of data exchanges among applications or their components—what we typically call a “workflow.” It’s common to think of threading workflows through both applications and workers as a process separate from the applications themselves. Remember the “enterprise service bus”? That’s still what most enterprises prefer for business processes that involve AI. Get an AI agent that does something, give it the output of some prior step, and let it then create output for the step beyond it. The decision as to whether an AI agent is then “autonomous” is really made by whether its output goes to a human for review or is simply accepted and implemented. ... What enterprises like about their vision of an AI agent is that it’s possible to introduce AI into a business process without having AI take over the process or requiring the process to be reshaped to accommodate AI. Tech adoption has long favored strategies that let you limit the scope of impact, to control both cost and the level of disruption the technology creates. This favors having AI integrated with current applications, which is why enterprises have always thought of AI improvements to their business operation overall as being linked to incorporating AI into business analytics.
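A minimal sketch of that autonomy distinction: the agent step is identical either way; what changes is whether a human review gate sits between its output and the next step of the workflow. Everything here is illustrative structure, not any specific framework's API:

```python
# The autonomy question as code: "autonomous" vs. "assistive" is decided
# by whether a human gate approves the output before it is applied.
from typing import Callable, Optional

def agent_step(workflow_input: str) -> str:
    # Stand-in for a call to an AI agent; purely illustrative.
    return f"proposed update for: {workflow_input}"

def run_step(workflow_input: str,
             review: Optional[Callable[[str], bool]]) -> Optional[str]:
    output = agent_step(workflow_input)
    if review is not None and not review(output):  # assistive: human-in-the-loop
        return None                                # rejected; nothing is applied
    return output                                  # approved, or fully autonomous

# Assistive mode routes through a reviewer; autonomous mode passes review=None.
print(run_step("reorder inventory for SKU 1042", review=lambda out: "update" in out))
print(run_step("reorder inventory for SKU 1042", review=None))
```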


Liquid Cooling is ideal today, essential tomorrow, says HPE CTO

We’re moving from standard consumption levels—like 1 kilowatt per rack—to as high as 3 kilowatts or more. The challenge lies in provisioning that much power and doing it sustainably. Some estimates suggest that data centers, which currently account for about 1% of global power consumption, could rise to 5% if trends continue. This is why sustainability isn’t just a checkbox anymore—it’s a moral imperative. I often ask our customers: Who do you think the world belongs to? Most pause and reflect. My view is that we’re simply renting the world from our grandchildren. That thought should shape how we design infrastructure today. ... Air cooling works until a point. But as components become denser, with more transistors per chip, air struggles. You’d need to run fans faster and use more chilled air to dissipate heat, which is energy-intensive. Liquid, due to its higher thermal conductivity and density, absorbs and transfers heat much more efficiently. Some DLC systems use cold plates only on select components. Others use them across the board. There are hybrid solutions too, combining liquid and air. But full DLC systems, like ours, eliminate the need for fans altogether. ... Direct liquid cooling (DLC) is becoming essential as data centers support AI and HPC workloads that demand high performance and density. 

Daily Tech Digest - October 28, 2024

Generative AI isn’t coming for you — your reluctance to adopt it is

Faced with a growing to-do list and the new balancing act of returning from maternity leave to an expanded role leading public relations for a publicly traded tech company, I opened Jasper AI. I admittedly smirked at some of the functionality. Changing the tone? Is this AI emotionally intelligent? Maybe more so than some former colleagues. I began on a blank screen. I started writing a few lines and asked the AI to complete the piece for me. I reveled in the schadenfreude of its failure. It summarized what I had written at the top of the document and just spit it out below. Ha! I had proven my superiority. I went back into my cave, denying myself and my organization the benefits of this transformative technology. The next time I used gen AI, something in me changed. I realized how much prompting matters. You can’t just type a few initial sentences and expect the AI to understand what you want. It still can’t read our minds (I think). But there are dozens of templates that the AI understands. For PR professionals, there are templates for press releases, media pitches, crisis communications statements, press kits and more.


What's Preventing CIOs From Achieving Their AI Goals?

"While no CIO wants to be left behind, they are also prudent about their AI adoption journeys and how they implement the technology for business in a responsible manner," said Dr. Jai Ganesh, chief product officer, HARMAN International. "While there are many business use cases, enterprises are prioritizing these on a must-have immediately to implement basis." ... He also oversees AI implementation across his company. Technology leaders say it will take at least two to three years before AI becomes mainstream across the enterprise. Rakesh Jayaprakash, chief analytics evangelist at ManageEngine, told ISMG that we would start to see "very tangible results" at a larger scale in another one or two years. "Tangible results" refer to commoditization of AI, which accelerates the ROI, he said. "While there is a lot of hype around AI now, the true value comes when the organizations are able to see the outcomes," Jayaprakash said. "Right now, many organizations jump in with very high expectations of what is possible through AI, because we've started to use tools such as ChatGPT to accomplish very simple tasks. But when it comes to organization-level use cases, those are a little more complex."


Bridging the Data Gap: The Role of Industrial DataOps in Digital Transformation

One of the main issues faced by organizations is the lack of context in industrial data. Unlike IT systems, where data is typically well-defined and structured, data from industrial environments often lacks the necessary context to be useful. For example, a temperature reading from a manufacturing machine might be labeled simply as “temperature sensor 1,” leaving operators to guess its relevance without proper identification. This lack of contextualization—when applied to thousands of data points across multiple facilities—is a major barrier to advanced analytics and digitalization programs. ... By implementing Industrial DataOps, organizations can address this gap by contextualizing data as close to the source as possible—ideally at the edge of the network. This approach empowers operators who have tribal knowledge of the data and its sources to deliver ready-to-use data to IT and line-of-business users in their organization. Decisions become faster and more informed. The ultimate goal is to transform raw data into valuable insights that drive operational improvements. ... As organizations adopt Industrial DataOps, they unlock the potential for rapid innovation. With a solid data management framework in place, OT teams can easily explore new use cases and validate hypotheses.
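A minimal sketch of contextualization at the edge, turning a bare "temperature sensor 1" reading into a self-describing record by joining it with an asset model. The asset model fields and values are invented for illustration; in practice they encode the operators' tribal knowledge:

```python
# Contextualize a raw OT reading at the edge: attach asset, site, unit,
# and alarm metadata so downstream IT users get self-describing data.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative asset model, typically curated by the operators themselves.
ASSET_MODEL = {
    "temperature sensor 1": {
        "asset": "extruder-line-3/barrel-zone-2",
        "site": "plant-frankfurt",
        "unit": "degC",
        "alarm_high": 240.0,
    }
}

@dataclass
class ContextualizedReading:
    tag: str
    value: float
    timestamp: str
    asset: str
    site: str
    unit: str
    in_alarm: bool

def contextualize(tag: str, value: float) -> ContextualizedReading:
    meta = ASSET_MODEL[tag]
    return ContextualizedReading(
        tag=tag,
        value=value,
        timestamp=datetime.now(timezone.utc).isoformat(),
        asset=meta["asset"],
        site=meta["site"],
        unit=meta["unit"],
        in_alarm=value >= meta["alarm_high"],
    )

print(asdict(contextualize("temperature sensor 1", 247.3)))
```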


Ensuring AI-readiness of Data Is a Long-term Commitment

Data becomes intellectual property in the world of GenAI; it is the means by which one can customize algorithms to reflect the brand voice and deliver great client service. With that in mind, Birkhead states that modernizing data and ensuring its AI-readiness is a long-term commitment. While organizations can make incremental progress year after year, building an analytic factory to produce AI models that support the business takes strategy, investment, and an enabling leadership team. Highlighting JPMC’s data strategy, Birkhead states that the components include data design principles, operating models, principles around platforms, tooling, and capabilities. Additionally, talent, governance, data, and AI ethics also come into play, but the ultimate goal is to have incredibly high-quality data that is self-describing and understandable by both humans and machines. From Birkhead’s standpoint, to be AI-ready with data, organizations have to get data to a state where a data scientist, user, or AI researcher can go into a marketplace and understand everything about the data.


Business Etiquette Classes Boom as People Relearn How to Act at Work

Workers who had substantial professional experience before the pandemic, including managers and executives, still need help adapting to hybrid and remote work, Senning said. He has been coaching leaders on best practices for such things as communicating through your calendar and deciding whether to call, text or use Slack to reach an employee. Establishing etiquette for video meetings has also been a challenge for many firms, he notes. Bad behavior in virtual meetings has occasionally made headlines in recent years, such as the backlash against Vishal Garg, CEO of the mortgage lending firm Better.com, for announcing mass layoffs over Zoom ahead of the holidays in 2021. "If I had a magic button that I could push that could get people to treat video meetings with 50 percent of the same level of professionalism they treat an in-person meeting, I would make a lot of HR, personnel managers, and executives very, very happy," Senning said. Tech companies also are paying for etiquette and professionalism training for their workers, especially if they're bringing in employees who have never worked in person before, according to Crystal Bailey, director of the Etiquette Institute of Washington, who counts Amazon among her clients.


Exploring the Power of AI in Software Development - Part 1: Processes

AI holds the power to significantly enhance the requirement analysis and planning processes at the early stages of the software development life cycle (SDLC). It can analyze massive amounts of data in order to identify user needs and preferences, allowing developers to make informed decisions about features and functionality. ... AI can also look at coding rates per user story within an app architecture context and allow Product Managers to better determine project timelines and resource needs. In doing so, they can more accurately predict the risk-reward of time-to-market versus high quality for every release, knowing that no software will be 100% defect-free. ... With AI, you have a pair programmer who has infinite patience. Someone who will not judge you for seemingly "stupid" questions. Having this kind of support can increase an engineer's capabilities and productivity. So often as a junior engineer, I was afraid to ask the senior engineers on my team questions because I thought I should know the answer. Engineers can use AI without the worry of judgment, so no question is stupid, no answer should be known.


How AI is Shaping the Future of Product Development

Product testing and iteration processes are also being revolutionized by AI, resulting in shorter development cycles and better product outcomes. While tried-and-true testing methods can work well, they often have long cycles or may miss problems. In contrast to traditional testing, AI-driven automation offers a new degree of efficiency and accuracy. AI tools for early-stage testing make it possible to discover issues quickly and try out potential applications, reducing the manual resources spent validating components or debugging. What's more, AI's ability to analyze code bases comprehensively provides targeted insights for ongoing improvements. By integrating AI into testing processes, businesses can accelerate development cycles, reduce costs, and deliver products that better align with user expectations. ... By embedding AI into their growth strategies, companies can benefit in numerous ways. It allows more targeted and personalized experiences to be delivered, in turn personalizing the products or services that companies provide. Such custom-built solutions not only enhance user experience but also help create brand loyalty. Additionally, AI enables data-driven decision making that facilitates strategic planning and execution.


From Safety to Innovation: How AI Safety Institutes Inform AI Governance

According to the report, this “first wave” of AISIs has three common characteristics:

- Safety-focused: The first wave of AISIs was informed by the Bletchley AI Safety Summit, which declared that “AI should be designed, developed, deployed, and used in a manner that is safe, in such a way as to be human-centric, trustworthy, and responsible.” These institutes are particularly concerned with mitigating abuse and safeguarding frontier AI models.
- Government-led: These AISIs are governmental institutions, providing them with the “authority, legitimacy, and resources” needed to address AI safety issues. Their governmental status helps them access leading AI models to run evaluations, and importantly, it gives them greater leverage in negotiating with companies unwilling to comply.
- Technical: AISIs are focused on attracting technical experts to ensure an evidence-based approach to AI safety.

The report also points out some key ways AISIs are unique. For one, AISIs are not a “catch-all” entity to tackle the complex and ever-evolving AI governance landscape. They are also relatively free of the bureaucracy commonly associated with governmental agencies. This may be because these institutes have very little regulatory authority and focus more on establishing best practices and conducting safety evaluations to inform responsible AI development.


Current Top Trends in Data Analytics

One of the most impactful data analytics trends right now is the integration of AI and machine learning (ML) into analytics frameworks, observes Anil Inamdar, global head of data services at data monitoring and management firm Instaclustr by NetApp, in an online interview. "We are seeing the emergence of a new data 4.0 era, which builds on previous shifts that focused on automation, competitive analytics, and digital transformation," Inamdar states. "This distinct new phase leverages AI/ML and generative AI to significantly enhance data analytics capabilities," he says. While the transformative potential is now here for the taking, enterprises must carefully strategize across several key areas. ... Data governance should be a top concern for all enterprises. "If it isn't yours, you’re heading for a world of hurt," warns Kris Moniz, national data and analytics practice lead for business and technology advisory firm Centric Consulting, via email. Data governance dictates the rules under which data should be managed, Moniz says. "It doesn’t just do this by determining who gets access to what," he notes. "It also does it by defining what your data is, setting processes that can guarantee its quality, building frameworks that align disparate systems across common domains, and setting standards for common data that all systems should consume."


Effective Data Mesh Begins With Robust Data Governance

When implemented correctly, removing the dependency on centralised systems and IT teams can truly transform the way organisations operate. However, introducing a data mesh can also raise fears and concerns relating to storage, duplication, management, and compliance, all of which must be addressed if it is to succeed. With decentralised data management, it’s also critical that everyone follows the same stringent set of rules, particularly regarding the creation, storage, and protection of data. If not, issues will quickly arise. Additionally, if any team leaders or department heads put their own tools or processes in place, the results may cause far more problems than they solve. Trusting individuals to stick to data guidelines is too risky. Instead, adherence should be enforced in a way that ensures standards are followed, without impacting agility or frustrating users. This may sound impractical, but a computational governance approach can impose the necessary restrictions, while at the same time accelerating project delivery. Naturally, not everyone will be quick (or keen) to adjust, but with additional support and training even the most reluctant individuals can learn how to adopt a more entrepreneurial mindset.



Quote for the day:

"Trust is the lubrication that makes it possible for organizations to work." -- Warren G. Bennis

Daily Tech Digest - May 07, 2024

How generative AI is redefining data analytics 

When applied to analytics, generative AI:

- Streamlines the foundational data stages of ELT: Predictive algorithms are applied to optimize data extraction, intelligently organize data during loading, and transform data with automated schema recognition and normalization techniques.
- Accelerates data preparation through enrichment and data quality: AI algorithms predict and fill in missing values and identify and integrate external data sources to enrich the data set, while advanced pattern recognition and anomaly detection ensure data accuracy and consistency (see the sketch below).
- Enhances analysis of data, such as geospatial and autoML: Mapping and spatial analysis through AI-generated models enable accurate interpretation of geographical data, while automated selection, tuning, and validation of machine learning models increase the efficiency and accuracy of predictive analytics.
- Elevates the final stage of analytics, reporting: Custom, generative AI-powered applications provide interactive data visualizations and analytics tailored to specific business needs.
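A minimal sketch of the data-preparation items above — model-based imputation of missing values plus anomaly detection — using scikit-learn on synthetic data (the array shapes and missingness rate are invented for illustration):

```python
# Model-based data preparation: predict-and-fill missing values,
# then flag anomalous rows that slipped past data-quality checks.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
X[rng.random(X.shape) < 0.1] = np.nan  # knock out ~10% of values

# Each missing value is predicted from the other columns.
X_filled = IterativeImputer(random_state=0).fit_transform(X)

# Unsupervised anomaly detection over the completed data set.
labels = IsolationForest(random_state=0).fit_predict(X_filled)
print(f"Imputed array: {X_filled.shape}, anomalous rows: {(labels == -1).sum()}")
```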


Open-source or closed-source AI? Data quality and adaptability matter more

Licensing and usage terms of service matter in that they dictate how you use a particular model — and even what you use it for. Even so, getting caught up in the closed vs. open zealotry is shortsighted at a time when 70% of CEOs surveyed expect gen AI to significantly alter the way their companies create, deliver and capture value over the next three years, according to PwC. Rather, you should focus on the quality of your data. After all, data will be your competitive differentiator — not the model. ... Experimenting with different model types and sizes to suit your use cases is a critical part of the trial-and-error process. Right-sizing, or deploying the most appropriate model sizes for your business, is even more crucial. Do you require a broad, boil-the-ocean approach that spans as much data as possible to build a digital assistant with encyclopedic knowledge? A large LLM trained on hundreds of billions of data points may work well. ... Of course, the gen AI model landscape is ever evolving. Future models will look and function differently than those of today. Regardless of your choices, with the right partner you can turn your data ocean into a wellspring of insights.
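A hedged sketch of the right-sizing idea: evaluate candidate models cheapest-first against a task-specific evaluation set, and keep the smallest one that clears your quality bar. The "models" here are canned stubs standing in for real LLM endpoints; the names and threshold are invented.

```python
EVAL_SET = [("2+2?", "4"), ("capital of France?", "Paris"), ("3*3?", "9")]
QUALITY_BAR = 0.66   # the minimum accuracy your use case tolerates

def accuracy(answer_fn):
    return sum(answer_fn(q) == a for q, a in EVAL_SET) / len(EVAL_SET)

# Ordered cheapest-first; in reality each entry would call a differently
# sized model endpoint rather than a lookup table.
candidates = [
    ("small-3b",   lambda q: {"2+2?": "4"}.get(q, "?")),
    ("medium-13b", lambda q: {"2+2?": "4", "3*3?": "9"}.get(q, "?")),
    ("large-70b",  lambda q: {"2+2?": "4", "3*3?": "9",
                              "capital of France?": "Paris"}.get(q, "?")),
]

chosen = next(name for name, fn in candidates if accuracy(fn) >= QUALITY_BAR)
print(chosen)   # "medium-13b": the smallest model that clears the bar
```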


Tips for Building a Platform Engineering Discipline That Lasts

A great platform engineer is defined both by their ability to create infrastructure and by their ability to advocate for and guide others (which is where communication skills come in) — especially in the platforms that are maturing today. As far as hard skills go, the platform engineer should have experience in cloud platforms, CI/CD, IaC, security, and automation. Other roles you’ll need include a product owner to manage platform stakeholders and track KPIs. Our 2024 State of DevOps report found that 70% of respondents said a product manager was important to the platform team – 52% of whom called the role “critical”. To avoid complexity and scaling issues, you’ll also need architects with the vision and skills to help the platform engineering team design and build the platform. Infrastructure as code (IaC) is version control for your infrastructure. It makes infrastructure human-readable, auditable, repeatable, scalable, and securable. IaC also lets disparate teams — developers, operations, and QA — review, collaborate, iterate, and maintain infrastructure code simultaneously.
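The IaC idea fits in a few lines: infrastructure is a declarative document kept in version control, and an idempotent "apply" step reconciles what actually exists with what the document declares. This toy sketch is illustrative only; real tools such as Terraform or Pulumi do the same thing at scale.

```python
DESIRED = {   # this dict would live in git, reviewed like any other code
    "web-server": {"size": "t3.small", "replicas": 2},
    "job-runner": {"size": "t3.medium", "replicas": 1},
}

current_state = {}   # what actually exists right now

def apply(desired: dict, state: dict) -> None:
    for name, spec in desired.items():
        if state.get(name) != spec:
            print(f"reconciling {name} -> {spec}")
            state[name] = spec        # create or update to match the document
    for name in set(state) - set(desired):
        print(f"destroying {name}")   # anything not declared is removed
        del state[name]

apply(DESIRED, current_state)   # first run creates everything
apply(DESIRED, current_state)   # second run changes nothing: idempotent
```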


What Is the American Privacy Rights Act, and Who Supports It?

The APRA ostensibly is about data, but AI is also covered to a degree. Companies must evaluate their “covered algorithms” before deploying them and provide that evaluation to the FTC and the public. Companies must also adhere to people’s requests to opt out of the use of any algorithm related to housing, employment, education, health care, insurance, credit, or access to places of public accommodation. The APRA would be enforced by a new bureau operating under the Federal Trade Commission (FTC). State attorneys general would also be able to enforce the new law, and individuals could file private lawsuits against companies that violate it. There are several important exceptions in the APRA. For instance, small businesses, defined as having less than $40 million in annual revenue or collecting data on 200,000 or fewer individuals (as long as they’re not in the data-selling business themselves), are exempt from the APRA’s requirements. Governmental agencies and organizations working for them are exempt as well, as are non-profit organizations whose main purpose is fighting fraud.
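Taking the article's wording of the small-business exemption at face value, the test reduces to a simple predicate. The sketch below is an illustration of that reading, not legal logic, and the argument names are invented.

```python
def apra_small_business_exempt(annual_revenue_usd: float,
                               individuals_with_data: int,
                               sells_data: bool) -> bool:
    # Encodes the exemption exactly as the article states it:
    # under the revenue OR data-volume threshold, and not a data seller.
    return ((annual_revenue_usd < 40_000_000
             or individuals_with_data <= 200_000)
            and not sells_data)

assert apra_small_business_exempt(5_000_000, 500_000, sells_data=False)
assert not apra_small_business_exempt(5_000_000, 50_000, sells_data=True)
```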


Empowering Users: Embracing Product-Centric Strategies in SaaS

A non-negotiable requirement for a SaaS product to succeed with a product-centric strategy is for it to be intuitively designed with minimal friction and a focus on delivering value as quickly as possible. This is not a set-and-forget task; it demands a profound understanding of the critical user journey and the ruthless elimination of friction and pain points, rather than simply plastering feature promotions through in- and out-of-product interventions. None of this can be done if teams don’t use data analytics and prioritize the voice of the customer through feedback loops to further product development and work toward building a product customers love. A great example of a product-led growth (PLG) pioneer is Figma. ... On the other hand, adopting a PLG approach requires fundamental organizational shifts. The success of PLG requires a combined, multidisciplinary team dedicated to continuous improvement and adaptation of the product to support new customer acquisition as well as retention and growth.


6 tips to implement security gamification effectively

Gamification leverages elements of traditional gaming, online and offline, to boost engagement and investment in the learning process. Points, badges, and leaderboards reward successful actions, fostering a sense of achievement and friendly competition. Engaging scenarios and challenges simulate real-world threats, allowing trainees to apply knowledge practically. Difficulty levels keep learners engaged, while immediate feedback on decisions solidifies learning and highlights areas for improvement. Effective implementation hinges on transparency, simplicity, and a level playing field. A central dashboard that displays the same security data for everyone keeps things simple, fostering a shared understanding of progress. ... Personalized challenges help ensure engagement. New security teams might focus on mastering foundational tasks like vulnerability scans, while seasoned teams tackle advanced challenges like reducing response times for critical security events. This keeps everyone motivated and learning, while offering continuous improvement for the entire team.
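A small sketch of the scoring mechanics described above: completed challenges award points, and the points feed a shared leaderboard that everyone sees. The challenge names and point values are invented for the example.

```python
from collections import defaultdict

POINTS = {"vuln_scan": 10, "phishing_drill": 25, "incident_response": 50}
scores = defaultdict(int)   # one shared scoreboard, visible to all teams

def complete_challenge(team: str, challenge: str) -> None:
    scores[team] += POINTS[challenge]   # immediate, transparent reward

def leaderboard():
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

complete_challenge("blue-team", "vuln_scan")
complete_challenge("blue-team", "phishing_drill")
complete_challenge("red-team", "incident_response")
print(leaderboard())   # [('red-team', 50), ('blue-team', 35)]
```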


Rethinking ‘Big Data’ — and the rift between business and data ops

Just as data scientists need to think more like businesspeople, so too must businesspeople think more like data scientists. This goes to the issue of occupational identity. Executives need to expand their professional identities to include data. Data professionals need to recognize that ΔI (changes in information) does not necessarily equate to ΔB (changes in behavior). Going forward, data professionals are not just in the information/insight delivery business; they are in the “create insight that drives value-creating behavior” business. The portfolio of tools available now has democratized the practice of data science. One no longer needs to be a math genius or coding phenom to extract value from data — see Becoming a Data Head: How to Think, Speak, and Understand Data Science, Statistics, and Machine Learning by Alex J. Gutman and Jordan Goldmeier. ... Executives need ready access to data professionals to guide their use of data power tools. Data professionals need to be embedded in the business rather than quarantined in specialized data gulags.


The Technical Product Owner

There is a risk that the technical Product Owner or product manager might no longer focus on the “why” but start interfering with the “how,” which is the Developers’ domain. On the other hand, a technical Product Owner might help the Developers understand the long-term business implications of technical decisions made today. ... A technical Product Owner would be highly beneficial when the product involves complex technical requirements or relies heavily on specific technologies. For example, in projects involving intricate software architecture or specialized domain knowledge, a technical Product Owner can provide valuable guidance, facilitate more informed decision-making, and communicate effectively with the Developers. This deep technical understanding can lead to better solutions, improved product quality, and increased customer satisfaction, especially in industries where technical expertise is critical, such as software development or engineering.


The digital transformation divide in Europe’s banking industry

Europe’s digital divide is a product of familiar factors: internet connectivity, digital literacy, and the availability of smartphones and digital devices. Disparities in broadband access between urban and rural communities remain stubbornly persistent. According to Eurostat, around 21% of rural households in the European Union do not have access to broadband internet, compared to only 2% of urban households. In Romania, which ranked lowest on the EU’s Digital Economy and Society Index in 2022, the market is dominated by incumbent banks. Only 69.1% of adults hold a bank account, pointing to low levels of financial literacy and inclusion – underpinned by a preference for a cash economy. In contrast, the UK has seen fintech adoption growth of over 60% according to data from Tipalti, and Lithuania has established itself as an impressive fintech ecosystem backed by the nation’s central bank. However, it is too simplistic to reduce the digital divide to regional disparities, as the starker differences lie between countries themselves.


Why AMTD Is the Key to Stopping Zero-Day Attacks

AMTD technology uses polymorphism to create a randomized, dynamic runtime memory environment. Deployable on endpoints and servers, this polymorphic capability creates a prevention-focused solution that constantly moves system resources while leaving decoy traps in their place. Threats then see decoy resources where real ones should be and end up trapped. For users, it’s business as usual: they don’t notice any difference, system performance is unaffected, and security teams gain a new layer of preventative telemetry. Today, more and more companies are turning to AMTD technologies to defeat zero-days. In fact, industry analysts like Gartner suggest that AMTD technology is paving the way for a new era of cyber defense possibilities. That’s because instead of trying to detect zero-day compromise, these technologies prevent exploits from deploying in the first place. Against zero-day attacks, this is the only defensive approach organizations can rely on.
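A toy simulation of the moving-target idea, with invented addresses and resource names: real resources are periodically relocated, decoys are left at their old locations, and any access to a stale address becomes a detection event. This is a conceptual sketch, not how any particular AMTD product is implemented.

```python
import random

class MovingTargetMemory:
    def __init__(self, resources):
        self.layout = {}    # address -> real resource
        self.decoys = set() # addresses now holding traps
        for addr, name in zip(random.sample(range(0x1000, 0x2000),
                                            len(resources)), resources):
            self.layout[addr] = name

    def shuffle(self):
        """Move every real resource; leave a decoy at its previous address."""
        old = list(self.layout.items())
        self.layout.clear()
        for addr, name in old:
            self.decoys.add(addr)
            new_addr = random.choice([a for a in range(0x1000, 0x2000)
                                      if a not in self.decoys])
            self.layout[new_addr] = name

    def access(self, addr):
        if addr in self.decoys:
            return "ALERT: decoy touched -- likely exploit attempt"
        return self.layout.get(addr, "fault")

mem = MovingTargetMemory(["credential_store"])
stale_addr = next(iter(mem.layout))   # an address an attacker might have scouted
mem.shuffle()
print(mem.access(stale_addr))         # the stale address is now a trap
```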



Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas