
Daily Tech Digest - December 28, 2025


Quote for the day:

"The best reason to start an organization is to make meaning; to create a product or service to make the world a better place." -- Guy Kawasaki



PIN It to Win It: India’s digital address revolution

DIGIPIN is a nationwide geo-coded addressing system developed by the Department of Posts in collaboration with IIT Hyderabad. It divides India into approximately 4m x 4m grids and assigns each grid a unique 10-character alphanumeric code based on latitude and longitude coordinates. DIGIPIN's real power lies in its ability to function as a persistent, interoperable location identifier across India’s dispersed public and private networks. Unlike conventional addresses, which depend on textual descriptions, a DIGIPIN condenses geo-coordinates, administrative metadata and unique spatial identifiers into a 10-character alphanumeric string. As a result, DIGIPIN is machine-readable, compatible with maps and unaffected by changes in naming conventions. When combined with systems like Aadhaar (identity), UPI (payments), ULPIN (land) and UPIC (property), DIGIPIN can enable seamless KYC validation, last-mile delivery automation, digital land titling and geographic analytics. ... For DIGIPIN to become the default address format in India, it has to succeed across three critical dimensions. A 10-character code might be accurate, but is it memorable? For a busy delivery rider or a rural farmer, remembering and sharing it must be easier than reciting a landmark-heavy address. The code must also be accepted across platforms – Aadhaar, land registries, GST, KYC forms, food delivery apps and banks.
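The grid idea can be sketched in a few lines: repeatedly subdivide a bounding box into a 4x4 grid and emit one symbol per level. This is an illustrative sketch only; the symbol set, bounding box and cell ordering below are assumptions, not the official DIGIPIN specification.

```python
# Illustrative grid geocoder: subdivide a bounding box into a 4x4 grid,
# pick the cell containing the point, emit a symbol, then recurse into
# that cell. NOT the official DIGIPIN algorithm; symbols and bounds are
# made-up placeholders.

SYMBOLS = "23456789CFJKLMPT"  # 16 symbols, one per cell in a 4x4 grid

def encode(lat, lon, lat_range=(2.5, 38.5), lon_range=(63.5, 99.5), levels=10):
    """Encode a coordinate into a `levels`-character grid code."""
    lat_lo, lat_hi = lat_range
    lon_lo, lon_hi = lon_range
    code = []
    for _ in range(levels):
        lat_step = (lat_hi - lat_lo) / 4
        lon_step = (lon_hi - lon_lo) / 4
        row = min(int((lat - lat_lo) // lat_step), 3)
        col = min(int((lon - lon_lo) // lon_step), 3)
        code.append(SYMBOLS[row * 4 + col])
        # Shrink the bounding box to the chosen cell for the next level.
        lat_lo, lat_hi = lat_lo + row * lat_step, lat_lo + (row + 1) * lat_step
        lon_lo, lon_hi = lon_lo + col * lon_step, lon_lo + (col + 1) * lon_step
    return "".join(code)

print(encode(28.6139, 77.2090))  # a New Delhi-area coordinate -> 10-char code
```

Ten levels of 4x4 subdivision over a country-sized box shrink each cell to a few metres on a side, which is how a short fixed-length code can stand in for a precise location.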


Deepfakes leveled up in 2025 – here’s what’s coming next

Over the course of 2025, deepfakes improved dramatically. AI-generated faces, voices and full-body performances that mimic real people increased in quality far beyond what many experts expected just a few years ago. They were also increasingly used to deceive people. For many everyday scenarios — especially low-resolution video calls and media shared on social media platforms — their realism is now high enough to reliably fool nonexpert viewers. In practical terms, synthetic media have become indistinguishable from authentic recordings for ordinary people and, in some cases, even for institutions. And this surge is not limited to quality. ... Looking forward, the trajectory for next year is clear: deepfakes are moving toward real-time synthesis that can produce videos closely resembling the nuances of a human’s appearance, making it easier for them to evade detection systems. The frontier is shifting from static visual realism to temporal and behavioral coherence: models that generate live or near-live content rather than pre-rendered clips. ... As these capabilities mature, the perceptual gap between synthetic and authentic human media will continue to narrow. The meaningful line of defense will shift away from human judgment and toward infrastructure-level protections: secure provenance, such as cryptographically signed media, and AI content tools that follow the Coalition for Content Provenance and Authenticity (C2PA) specifications.
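The provenance idea can be illustrated with a toy signature check: a keyed signature travels with the media, so later tampering is detectable without anyone judging realism by eye. Real C2PA provenance uses certificate-based signatures embedded in manifests; the bare HMAC and placeholder key below are only a minimal stand-in.

```python
# Toy provenance sketch: attach a keyed signature to media bytes so that
# any later modification is detectable. Real C2PA manifests use X.509
# certificate chains, not a shared-secret HMAC like this.
import hmac, hashlib

def sign_media(media: bytes, key: bytes) -> str:
    return hmac.new(key, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, key: bytes, signature: str) -> bool:
    # compare_digest avoids leaking information via timing differences.
    return hmac.compare_digest(sign_media(media, key), signature)

key = b"publisher-secret-key"   # placeholder key
clip = b"...video bytes..."     # placeholder media payload
sig = sign_media(clip, key)

print(verify_media(clip, key, sig))         # True: untouched media
print(verify_media(clip + b"x", key, sig))  # False: tampered media
```

The point is the shape of the defense: verification becomes a mechanical check on bytes and keys rather than a perceptual judgment that deepfakes can defeat.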


Your Core Is Being Retired. Now What?

Eventually, all financial institutions will find themselves in the position of voluntarily or involuntarily going through a core migration. The stock market hammered one of the largest core processing companies in the world recently, effectively admitting publicly what most of the industry has known for years: They were more concerned about financial engineering of the share price than they were about product engineering a better outcome for their clients. Unfortunately, the market also learned recently that the largest core processing provider will soon be making some big changes and consolidating many of its core systems. It’s hard to imagine how a software company can effectively support and maintain this many diverse core platforms – and the rationale behind this decision seems obvious and needed. However, this is an incredibly risky inflection point for banks and credit unions on platforms targeted for retirement. The hope and bet is that most clients will be incentivized to migrate to one of the remaining cores. ... The retirement of your core is an opportunity to rethink the foundation of your institution’s future. While no core conversion is easy, those who approach it strategically, armed with data, foresight, and the right partners, can turn a forced migration into a competitive advantage. The next generation of cores promises greater flexibility, integration and scalability, but only for institutions that negotiate wisely, plan deliberately, and take control of their own timelines before someone else does.


Whether AI is a bubble or revolution, how does software survive?

Bubble or not, AI has certainly made some waves, and everyone is looking to find the right strategy. It’s already caused a great deal of disruption—good and bad—among software companies large and small. The speed at which the technology has moved since its coming-out party has been stunning; costs have dropped, hardware and software have improved, and the mediocre version of many jobs can be replicated in a chat window. It’s only going to continue. “AI is positioned to continuously disrupt itself,” said McConnell. “It's going to be a constant disruption. If that's true, then all of the dollars going to companies today are at risk because those companies may be disrupted by some new technology that's just around the corner.” First up on the list of disruption targets: startups. If you’re looking to get from zero to market fit, you don’t need to build the same kind of team you used to. “Think about the ratios between how many engineers there are to salespeople,” said Tunguz. “We knew what those were for 10 or 15 years, and now none of those ratios actually hold anymore. If we really are in a position where a single person can have the productivity of 25, management teams look very different. Hiring looks extremely different.” That’s not to say there won’t be a need for real human coders. We’ve seen how badly the vibe coding entrepreneurs get dunked on when they put their shoddy apps in front of a merciless internet.


Why Windows Just Became Disruptible in the Agentic OS Era

Identity is where the cracks show early. Traditional Windows environments assume a human logging into a device, launching applications, and accessing resources under their account. Entra ID and Active Directory groups, role-based access control across Microsoft 365, and Conditional Access policies all grew out of that pattern. An agentic environment forces a different set of questions. Who is authenticated when an agent books a conference room, issues a purchase order draft, or requests a sensitive dataset? How should policy cope with agents that mix personal and organizational context, or that act for multiple managers across overlapping projects? What happens when an internal agent needs to negotiate with an external agent that belongs to a partner or supplier? ... Agentic systems improve as they see more behavior. Early customers who allow their interactions, decisions, and corrections to be observed become de facto trainers for the platform. That creates a race to capture training data, not just market share. The same is true for the user experience. How people “vibe reengineer” processes isn’t optimized yet. The vendor that gets that experience right will empower AI-savvy users in new ways, and deep knowledge about those emerging processes will be hard to copy. It is likely, however, that more than one approach will emerge, which will set up the next round of competition.
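One way to make these identity questions concrete is a delegation record: the agent acts under an explicit, scoped, expiring grant tied to a human principal, so "who is authenticated" always has an answer. The field names below are invented for illustration; they are not an Entra ID or Active Directory schema.

```python
# Hypothetical delegation record: an agent acts on behalf of a human
# principal under a scoped, short-lived grant. All field names invented.
from dataclasses import dataclass
import time

@dataclass
class DelegationGrant:
    agent_id: str       # the machine identity doing the work
    on_behalf_of: str   # the human principal being represented
    scopes: tuple       # what the agent may do under this grant
    expires_at: float   # grants should be short-lived

    def allows(self, action: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        return now < self.expires_at and action in self.scopes

grant = DelegationGrant(
    agent_id="agent:room-booker-01",
    on_behalf_of="user:alice@example.com",
    scopes=("rooms.book",),
    expires_at=time.time() + 600,  # 10-minute grant
)

print(grant.allows("rooms.book"))       # True: in scope, not expired
print(grant.allows("purchase.create"))  # False: outside the grant
```

An agent serving multiple managers would then hold multiple grants, and an audit log of grant usage answers "who did what, for whom" after the fact.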


SaaS attacks surge as boards turn to AI for defence

"SaaS security, together with concerns around the secure use of AI moved from a niche security initiative to a boardroom imperative. The 2025 Verizon Data Breach Investigations Report (DBIR) called out a doubling of breaches involving third-party applications stemming from misconfigured SaaS platforms and unauthorized integrations, particularly those exploited by threat actors through scanning and credential stuffing," said Soby, Co-founder and Chief Technology Officer, AppOmni. ... "Security technologies leveraging AI agents have the potential to move the industry closer towards security operations autonomy. In fact, we're seeing innovative advancements there, especially in the development of SOC AI agents," said Ruzzi, Director of AI, AppOmni. She highlighted the Model Context Protocol, an emerging technical standard, as a mechanism that can act as a universal adapter between AI models and external systems. ... She warned that AI agents still face challenges when they deal with large and complex data sets. "But organizations need to look beyond the AI hype of agents to implement the technology in a way that will be truly useful for them. Handling large volumes of complex data still presents a challenge here. Agents are most useful when assigned to perform a targeted task that handles smaller volumes of simpler data," said Ruzzi.


Why CIOs must lead AI experimentation, not just govern it

The role of IT leadership is undergoing a profound transformation. We were once the gatekeepers of technology. Then came SaaS, which began to democratize technology access, putting powerful tools directly into the hands of employees. AI represents an even more significant shift. It can feel intimidating, and as leaders, we have a crucial responsibility to demystify it and make it accessible. Much like the dot.com boom, we're witnessing a transformative moment, and IT leaders must harness this potential to drive innovation. ... The key to successful AI adoption is fostering a culture of learning and experimentation. Employees at all levels, whether developers or non-developers, executives or individual contributors, must have the opportunity to get their hands on AI tools and understand how they work. Some companies are having employees train AI models and learn prompt engineering, which is a fantastic way to remove the mystery and show people how AI truly functions. We’re encouraging our own teams to write prompts and train chatbots, aiming for AI to become a true copilot in their daily tasks. Think of it as akin to an athlete who trains consistently, refining their skills to achieve better results. That’s the feeling we want our employees to have with AI — a tool that makes their work faster, better and, ultimately, more meaningful and joyful. My own mother’s relationship with her voice assistant, which has become an integral part of her life, is a simple reminder of how seamlessly technology can integrate when it’s genuinely helpful.


AI, fraud and market timing drive biometrics consolidation in 2025 … and maybe 2026

Fraud has overwhelmed organizations of all kinds, and Verley emphasizes the degree to which this has pulled enterprise teams and market players in adjacent areas together. AI has contributed to this wave of fraud in several important ways. The barrier to entry has been lowered, and forgeries are now scalable in a way cybercriminals could only have dreamed of just a few years ago. The proliferation of generative AI tools has also changed the state of the art in biometric liveness detection, with injection attack detection (IAD) now table stakes for secure remote user onboarding the way presentation attack detection (PAD) has been for the last several years. ... Reducing fraud is part of the motivation behind the EU Digital Identity Wallet, which launches in the year ahead and ties digital IDs to government-issued biometric documents with electronic chips. “That’s going to mean a huge uptick in onboarding people to issue them these new credentials that are going to be big in identity verification, and that’s going to be the best way to do that,” Goode says. At the same time, businesses that had no choice but to pay for identity services during the pandemic now have more choice, Verley says. So providers are emphasizing fraud protection to justify the value of their products. ... Uncertainty is a central feature of the AI market landscape, and Goode notes the possibility that if predictions of the AI market popping like a bubble in 2026 come true, restricted credit availability “could put a damper on acquisitions.”


Why Strategic Planning Without CIOs Fails

For large IT projects exceeding $15 million in initial budget, the research found average cost overruns of 45%, value delivery 56% below predictions, and 17% of projects becoming black swan events with cost overruns exceeding 200%, sometimes threatening organizational survival. These outcomes are not random. BCG research from 2024 surveying global C-suite executives across 25 industries found that organizations including technology leaders from the start of strategic initiatives achieve 154% higher success rates than those that do not. When CIOs enter after critical decisions are made, organizations discover mid-execution that constraints render promised features impossible, integration requirements multiply beyond projections, and vendor capabilities fail to match sales promises. Direct project costs pale beside the accumulated burden of technical debt. ... Gartner’s 2025 CIO Survey (released October 2024), which surveyed over 3,100 CIOs and technology executives, revealed that only 48% of digital initiatives meet or exceed their business outcome targets. However, Digital Vanguard CIOs, who co-own digital delivery with business leaders, achieve a 71% success rate. That improvement, from 48% to 71%, represents the difference between coin-flip odds and a reliable strategic advantage. Failed transformations do not merely waste money. They consume organizational capacity that could deliver value elsewhere.


Top 3 Reasons Why Data Governance Strategies Fail

Clearly, data governance is policy, not a solution. It nests within any organization that has deployed business analytics as part of its overall strategy – in fact, one of the reasons data governance fails is that it is not aligned with the enterprise’s business strategy. Governance is about ensuring the proper implementation of business rules and controls around your organization’s data. It involves the wholehearted participation of all company departments, especially IT and business. Any attempt to run it in a vacuum or silo means it’s doomed. ... A well-thought-out data governance plan must have a governing body and a defined set of procedures with a plan to execute them. To begin with, one has to identify the custodians of an enterprise’s data assets. Accountability is key here. The policy must determine who in the system is responsible for various aspects of the data, including quality, accessibility, and consistency. Then come the processes. A set of standards and procedures must be defined and developed for how data is stored, backed up, and protected. Not to be left out, a good data governance plan must also include an audit process to ensure compliance with government regulations. ... If an enterprise does not know where it’s headed with its data governance plan, reflected in black and white, it’s bound to stutter. Things like targets achieved, dollars saved, and risks mitigated need to be measured and recorded.

Daily Tech Digest - December 24, 2025


Quote for the day:

"The only person you are destined to become is the person you decide to be." -- Ralph Waldo Emerson



When is an AI agent not really an agent?

If you believe today’s marketing, everything is an “AI agent.” A basic workflow worker? An agent. A single large language model (LLM) behind a thin UI wrapper? An agent. A smarter chatbot with a few tools integrated? Definitely an agent. The issue isn’t that these systems are useless. Many are valuable. The problem is that calling almost anything an agent blurs an important architectural and risk distinction. ... If a vendor knows its system is mainly a deterministic workflow plus LLM calls but markets it as an autonomous, goal-seeking agent, buyers are misled not just about branding but also about the system’s actual behavior and risk. That type of misrepresentation creates very real consequences. Executives may assume they are buying capabilities that can operate with minimal human oversight when, in reality, they are procuring brittle systems that will require substantial supervision and rework. Boards may approve investments on the belief that they are leaping ahead in AI maturity, when they are really just building another layer of technical and operational debt. Risk, compliance, and security teams may under-specify controls because they misunderstand what the system can and cannot do. ... demand evidence instead of demos. Polished demos are easy to fake, but architecture diagrams, evaluation methods, failure modes, and documented limitations are harder to counterfeit. If a vendor can’t clearly explain how their agents reason, plan, act, and recover, that should raise suspicion. 


Five identity-driven shifts reshaping enterprise security in 2026

Organizations that continue to treat identity as a static access problem will fall behind attackers who exploit AI-powered automation, credential abuse, and identity sprawl. The enterprises that succeed will be those that re-architect identity security as a continuous, data-aware control plane, one built to govern humans, machines, and AI with the same rigor, visibility, and accountability. ... Unlike traditional shadow IT, shadow AI is both more powerful and more dangerous. Employees can deploy advanced models trained on sensitive company data, and these tools often store or transmit privileged credentials, API keys, and service tokens without oversight. Even sanctioned AI tools become risky when improperly configured or connected to internal workflows. ... With AI-driven automation, sophisticated playbooks previously reserved for top-tier nation-states become accessible to countries and non-state actors with far fewer resources. This levels the playing field and expands the number of threat actors capable of meaningful, identity-focused cyber aggression. In 2026, expect more geopolitical disruptions driven by identity warfare, synthetic information, and AI-enabled critical infrastructure targeting. ... Machine identities have become the primary source of privilege misuse, and their growth shows no sign of slowing. As AI-driven automation accelerates and IoT ecosystems proliferate, organizations will hit a governance tipping point. 2026 will force security teams to confront a tough reality: identity-first security can’t stop with humans.


Implementing NIS2 — without getting bogged down in red tape

NIS2 essentially requires three things: concrete security measures; processes and guidelines for managing these measures; and robust evidence that they work in practice. ... Therefore, two levels are crucial for NIS2: the technical measures and the evidence that they are effective. This is precisely where the transformation of recent years becomes apparent. Previously, concepts, measures, and specifications for software and IT infrastructures were predominantly documented in text form. ... The second area that NIS2 and the new Implementing Regulation 2024/2690 for digital services are enshrining in law is vulnerability management in the company’s own code and supply chain. This requires regular vulnerability scans, procedures for assessment and prioritization, timely remediation of critical vulnerabilities, and regulated vulnerability handling and — where necessary — coordinated vulnerability disclosure. Cloud and SaaS providers also face additional supply chain obligations ... The third area where NIS2 quickly becomes a paper tiger is the combination of monitoring, incident response, and the new reporting requirements. The directive sets clear deadlines: early warning within 24 hours, a structured report after 72 hours, and a final report no later than one month. ... NIS2 forces companies to explicitly define their security measures, processes, and documentation. This is inconvenient — especially for organizations that have previously operated largely on an ad-hoc basis.
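The reporting clock above can be sketched as a small deadline calculator keyed off detection time. The 24-hour and 72-hour deadlines come from the directive as described; "one month" is approximated here as 30 days purely for illustration.

```python
# Minimal sketch of the NIS2 reporting clock: given a detection time,
# compute the early-warning, structured-report, and final-report deadlines.
# 24h/72h are from the directive; "one month" is approximated as 30 days.
from datetime import datetime, timedelta

def nis2_deadlines(detected_at: datetime) -> dict:
    return {
        "early_warning": detected_at + timedelta(hours=24),
        "structured_report": detected_at + timedelta(hours=72),
        "final_report": detected_at + timedelta(days=30),
    }

detected = datetime(2025, 12, 24, 9, 0)
for name, due in nis2_deadlines(detected).items():
    print(f"{name}: due {due:%Y-%m-%d %H:%M}")
```

Wiring a check like this into incident tooling is one way to keep the deadlines from living only in a policy document.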


Rethinking Anomaly Detection for Resilient Enterprise IT

Being armed with this knowledge is only the first step, though. The next challenge is detecting anomalies consistently and accurately in complex environments. This task is becoming increasingly difficult as IT environments undergo continuous digital transformation, shift towards hybrid-cloud setups, and rely on legacy systems that are well past their prime. These challenges introduce dynamic data, pushing IT leaders to rethink their anomaly detection processes. ... By incorporating seasonal patterns, user behavior, and workload types, adaptive baselines filter out the noise and highlight genuine deviations. Another factor to integrate is the overall context of a situation. Metrics rarely operate in isolation. During a planned deployment, a spike in network latency is expected; the same spike during steady operations would be read very differently. By combining telemetry with contextual signals, anomaly detection systems can separate the expected from the unexpected. ... Anomaly detection is meant to strengthen operations and improve overall resilience. However, it cannot deliver on this promise when teams are drowning in generated alerts. By adopting context-aware approaches across the full variety of anomalies, systems can identify root causes, correct systemic failures that span multiple metrics, and mitigate the risk of outages.
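An adaptive baseline can be sketched as a rolling mean and standard deviation, flagging a point only when it deviates sharply from recent behavior rather than from a fixed threshold. The window size and 3-sigma cutoff below are illustrative choices, not recommendations.

```python
# Minimal adaptive-baseline detector: flag a point only when it deviates
# sharply from the rolling mean/std of the preceding window.
import statistics

def detect_anomalies(series, window=10, sigmas=3.0):
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.fmean(recent)
        std = statistics.pstdev(recent) or 1e-9  # avoid divide-by-zero
        if abs(series[i] - mean) / std > sigmas:
            flagged.append(i)
    return flagged

latency_ms = [20, 21, 19, 20, 22, 21, 20, 19, 21, 20, 180, 20, 21]
print(detect_anomalies(latency_ms))  # [10]: the 180 ms spike
```

The contextual layer the article describes would sit on top of this: the same flagged index is suppressed during a planned deployment window and escalated during steady-state operations.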


Bridging the Gap: Engineering Resilience in Hybrid Environments (DR, Failover, and Chaos)

Resilience in a hybrid environment isn't just about preventing failure; it’s about enduring it. It requires moving beyond hope as a strategy and embracing a tripartite approach: robust Disaster Recovery (DR), automated Failover, and proactive Chaos Engineering. ... Disaster Recovery is your insurance policy for catastrophic events. It is the process of regaining access to data and infrastructure after a significant outage—a hurricane hitting your primary data center, a massive ransomware attack, or a prolonged regional cloud failure. ... While DR handles catastrophes, Failover handles the everyday hiccups. Failover is the (ideally automatic) process of switching to a redundant or standby system upon the failure of the primary system. Failover mechanisms in a hybrid environment ensure immediate operational continuity by automatically switching workloads from a failed primary system (on-premises or cloud) to a redundant secondary system with minimal downtime. This requires coordinating recovery across cloud and on-premises platforms. ... Chaos engineering is a proactive discipline used to stress-test systems by intentionally introducing controlled failures to identify weaknesses and build resilience. In hybrid environments—which combine on-premises infrastructure with cloud resources—this practice is essential for navigating the added complexity and ensuring continuous reliability across diverse platforms.
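The failover pattern above can be sketched as a router that records health checks on the primary and switches to the standby only after sustained failure, so a single blip doesn't trigger a flap. Endpoint names and the failure threshold below are placeholders.

```python
# Minimal failover sketch: route to the standby only after several
# consecutive failed health checks on the primary.

class FailoverRouter:
    def __init__(self, primary, standby, max_failures=3):
        self.primary, self.standby = primary, standby
        self.max_failures = max_failures
        self.failures = 0

    def record_health_check(self, primary_healthy: bool):
        # Any success resets the counter; failures accumulate.
        self.failures = 0 if primary_healthy else self.failures + 1

    @property
    def active(self):
        # Fail over only on sustained failure, not a single blip.
        return self.standby if self.failures >= self.max_failures else self.primary

router = FailoverRouter(primary="onprem-dc1", standby="cloud-region-a")
router.record_health_check(False)
print(router.active)                  # still "onprem-dc1": one blip
for _ in range(2):
    router.record_health_check(False)
print(router.active)                  # "cloud-region-a": sustained failure
```

Chaos engineering, in this framing, is deliberately feeding failing health checks into a system like this in a controlled window and verifying the switchover actually happens.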


Should CIOs rethink the IT roadmap?

As technology consultancy West Monroe states: “You don’t need bigger plans — you need faster moves.” This is a fitting mantra for IT roadmap development today. CIOs should ask themselves where the most likely business and technology plan disrupters are going to come from. ... Understandably, CIOs can only develop future-facing technology roadmaps with what they see at a present point in time. However, they do have the ability to improve the quality of their roadmaps by reviewing and revising these plans more often. ... CIOs should revisit IT roadmaps quarterly at a minimum. If roadmaps must be altered, CIOs should communicate to their CEOs, boards, and C-level peers what’s happening and why. In this way, no one will be surprised when adjustments must be made. As CIOs get more engaged with lines of business, they can also show how technology changes are going to affect company operations and finances before these changes happen ... Equally important is emphasizing that a seismic change in technology roadmap direction could impact budgets. For instance, if AI-driven security threats begin to impact company AI and general systems, IT will need AI-ready tools and skills to defend and to mitigate these threats. ... Now is the time for CIOs to transform the IT roadmap into a more malleable and responsive document that can accommodate the disruptive changes in business and technology that companies are likely to experience.


Why shadow IT is a growing security concern for data centre teams

It is essential to recognise that employees use shadow IT to get their work done efficiently, not to deliberately create security risks. This should be front of mind for any IT teams and data centre consultants involved in infrastructure design and security provision. Assigning blame or blocking everything outright does not work. A more effective approach to shadow IT is to invest for the long term in a culture that positions IT as a partner in workplace productivity, not a hindrance. Ideally, this demands buy-in from senior management. Although it falls to IT teams to provide people with the tools for their jobs, offering choice, listening to employees’ requests and delivering prompt solutions will encourage the transparency IT needs to analyse usage patterns, identify potential issues and address minor issues before they grow into costly problems. Importantly, this goes a long way towards embracing new technologies and avoiding employees turning to shadow IT that they find and use without approval. ... While IT teams are focused on gaining visibility and control over the software, hardware and services used by their organisations, they also need to be careful not to stifle innovation. It is here that data centre operators can share ideas on ways to best achieve this balance, as there is never going to be one model that suits every business.


From Digitalization to Intelligence: How AI Is Redefining Enterprise Workflows

In the AI economy, digitalization plays another important role—turning paper documents into data suitable for LLM engines. This will become increasingly important as more sites restrict crawlers or require licensing, which reduces the usable pool of data. A 2024 report from the nonprofit watchdog Epoch AI projected that large language models (LLMs) could run out of fresh, human-generated training data as soon as 2026. Companies that rely purely on publicly available crawl data for continuous scaling will likely encounter diminishing returns. To avoid the looming shortage of publicly available data, enterprises will need to use their digitized documents and corporate data to fine-tune models for domain-specific tasks rather than rely only on generic web data. Intelligent capture technologies can now recognize document types, extract key entities, and validate information automatically. Once digitized, this data flows directly into enterprise systems where AI models can uncover insights or predict outcomes. ... Automation isn’t just about doing more with less; it’s about learning from every action. Each scan, transaction, or decision strengthens the feedback loop that powers enterprise AI systems. The organizations recognizing this shift early will outpace competitors that still treat data capture as a back-office function. The winners will be those that turn the last mile of digitalization into the first mile of intelligence.
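The capture pipeline described above can be sketched as three stages: classify the document, extract key fields, then validate before handing off downstream. The keyword rule and regular expressions below are toy stand-ins for the ML-based capture models the article describes.

```python
# Toy capture pipeline: classify -> extract -> validate. The rules here
# stand in for real document-classification and entity-extraction models.
import re

def classify(text):
    return "invoice" if "invoice" in text.lower() else "other"

def extract(text):
    amount = re.search(r"total:\s*\$?([\d.]+)", text, re.I)
    number = re.search(r"invoice\s*#?\s*([\w-]+)", text, re.I)
    return {
        "invoice_number": number.group(1) if number else None,
        "total": float(amount.group(1)) if amount else None,
    }

def validate(fields):
    # Reject captures with missing identifiers or implausible amounts.
    return fields["invoice_number"] is not None and (fields["total"] or 0) > 0

doc = "Invoice #INV-1042\nWidgets x3\nTotal: $149.50"
fields = extract(doc)
print(classify(doc), fields, validate(fields))
```

Only documents that pass validation would flow into enterprise systems; rejects loop back for human review, which is the feedback loop the article says strengthens the models over time.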


Boardrooms demand tougher AI returns & stronger data

Budget scrutiny is increasing as wider economic conditions remain uncertain and as organisations review early generative AI experiments. "AI investment is no longer about FOMO. Boards and CFOs want answers about what's working, where it's paying off, and why it matters now. 2026 will be a year of focus. Flashy experiments and perpetual pilots will lose funding. Projects that deliver measurable outcomes will move to the center of the roadmap," said McKee, CEO, Ataccama. ... "For years people have predicted that AI will hollow out data teams, yet the closer you get to real deployments, the harder that story is to believe. Once agents take over the repetitive work of querying, cleaning, documenting, and validating data, the cost of generating an insight will begin falling toward zero. And when the cost of something useful drops, demand rises. We've seen this pattern with steam engines, banking, spreadsheets, and cloud compute, and data will follow the same curve," said Keyser. Keyser said easier access to data and analysis is likely to change behaviours in business units that have not traditionally engaged with central data groups. He expects a rise in AI-literate staff across operational functions and a larger need for oversight. ... The organizations that adopt agents will discover something counterintuitive. They won't end up with fewer data workers, but more. This is Jevons paradox applied to analytics. When insight becomes easier, curiosity will expand and decision-making will accelerate.


The Blind Spots Created by Shadow AI Are Bigger Than You Think

If you think it’s the same as the old “shadow IT” problem with different branding, you’re wrong. Shadow AI is faster, harder to detect, and far more entangled with your intellectual property and data flows than any consumer SaaS tool ever was. ... Shadow AI is not malicious in nature; in fact, the intent is almost always to improve productivity or convenience. Unfortunately, the impact is a major increase in unplanned data exposure, untracked model interactions, and blind spots across your attack surface. ... Most AI tools don’t clearly explain how long they keep your data. Some retrain on what you enter, others store prompts forever for debugging, and a few had almost no limits at all. That means your sensitive info could be copied, stored, reused for training, or even show up later to people it shouldn’t. Ask Samsung, whose internal code found its way into a public model’s responses after an engineer uploaded it. They banned AI instantly. Hardly the most strategic solution, and definitely not the last time you’ll see this happen. ... Shadow AI bypasses identity controls, DLP controls, SASE boundaries, cloud logging, and sanctioned inference gateways. All that “AI data exhaust” ends up scattered across a slew of unsanctioned tools and locations. Your exposure assessments are, by default, incomplete because you can’t protect what you can’t see. ... Shadow AI has shifted from an occasional occurrence to everyday behavior across all departments.

Daily Tech Digest - December 07, 2023

Top 5 Trends in Cloud Native Software Testing in 2023

As digital threats become more sophisticated, there’s a heightened focus on security testing, particularly among large enterprises. This trend is about integrating security protocols right from the initial stages of development. Tools that perform SAST and DAST are becoming essential in testing workflows. ... The TestOps trend integrates testing into the continuous development cycle, echoing the collaborative and automated ethos of DevOps. TestOps focuses on enhancing communication between developers, testers, and operations, ensuring continuous testing and quicker feedback loops. It leverages real-time analytics to refine testing strategies, ultimately boosting software quality and efficiency. Extending the principles of DevOps, GitOps uses Git repositories as the backbone for managing infrastructure and application configurations, including testing frameworks. ... The rise of ephemeral test environments is a game-changer. These environments are created on demand and are short-lived, providing a cost-effective way to test applications in a controlled environment that closely mirrors production.
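The ephemeral-environment idea can be sketched with a context manager that provisions a throwaway environment on demand and unconditionally tears it down. Real setups provision containers or cluster namespaces; a temporary directory with a config file stands in for that here.

```python
# Ephemeral environment in miniature: provision on demand, use, tear down
# unconditionally. A temp directory stands in for a container/namespace.
import tempfile, shutil, os
from contextlib import contextmanager

@contextmanager
def ephemeral_env(name):
    path = tempfile.mkdtemp(prefix=f"{name}-")
    try:
        # "Provision": drop a config the system under test would read.
        with open(os.path.join(path, "config.ini"), "w") as f:
            f.write("[env]\nmode = test\n")
        yield path
    finally:
        shutil.rmtree(path)  # teardown runs even if the test fails

with ephemeral_env("checkout-service") as env:
    inside = os.path.isdir(env)
print(inside)               # True: environment exists during the test
print(os.path.isdir(env))   # False: fully torn down afterwards
```

Because every run gets a fresh environment and the `finally` clause always fires, tests can't leak state into each other, which is the property that makes ephemeral environments mirror production safely.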


Dump C++ and in Rust you should trust, Five Eyes agencies urge

In its guidance, CISA observes that Microsoft has acknowledged that about 70 percent of its bugs (CVEs) are memory safety vulnerabilities, that Google has confirmed a similar figure for its Chromium project, and that 67 percent of zero-day vulnerabilities in 2021 were memory safety flaws. Given that, CISA is advising that organizations move away from C/C++ because, even with safety training (and ongoing efforts to harden C/C++ code), developers still make mistakes. "While training can reduce the number of vulnerabilities a coder might introduce, given how pervasive memory safety defects are, it is almost inevitable that memory safety vulnerabilities will still occur," CISA argues. ... Bjarne Stroustrup, creator of C++, has defended the language, arguing that ISO-compliant C++ can provide type and memory safety given appropriate tooling, and that Rust code can be written in ways that are unsafe. But that message hasn't done much to tarnish the appeal of Rust and other memory safe languages. CISA suggests that developers look to C#, Go, Java, Python, Rust, and Swift for memory safe code.


How the insider has become the no.1 threat

For the organisation, this means the insider threat has not only become more pronounced but harder to counter. It requires effective management on two fronts: managing the remote/mobile workforce and dissuading employees from trading credentials or data for cash. For these reasons, businesses need to reinforce the security culture through staff awareness training and step up their policy enforcement, in addition to applying technical controls to ensure data is protected at all times. That’s not what is happening today. The Apricorn survey found only 14% of businesses control access to systems and data when allowing employees to use their own equipment remotely, a huge drop from 41% in 2022. Nearly a quarter require employees to seek approval to use their own devices, but they do not then apply any controls once that approval has been granted. Even more concerning is that the number of organisations that don’t require approval or apply any controls has doubled over the past year. This indicates a hands-off approach that assumes a level of implicit trust, directly contributing to the problem of the insider threat.


WestRock CIDO Amir Kazmi on building resiliency

There are three leadership principles I would highlight that help build resilience in the team. First is recognizing the pace of change and responding to the impact it has on a team. It’s not getting slower; it’s getting faster. One of the behaviors that can help your team is to ‘explain the why.’ Set the context before the content behind what needs to be accomplished so we’re all on the same journey. Second is recognizing that we have to instill a learning and growth mindset in the culture, in the leadership, and in the fabric of what we’re trying to achieve. Many businesses are shifting their business models from product to service, and as leaders, it’s important to build a level of learning in that journey for your teams. One of the leaders that I admire and have learned from is John Chambers, who has said, ‘It’s all about speed of innovation and changing the way you do business.’ If we don’t reimagine ourselves, we will get disrupted. Third is transparency around what the key priorities are — because not everything can be a priority — and then creating flexibility around those priorities and how we get to the outcomes.


AI Governance in India: Aspirations and Apprehensions

While India’s stance on AI regulation has sometimes appeared to waver, it is steadily working towards establishing a clear regulatory approach and AI governance mechanism, especially as the country assumes a more prominent role in the area of AI-related international cooperation. AI-enabled harms and security threats exist at all three levels of the AI stack: At the hardware level, there are vulnerabilities in the physical infrastructure of AI systems. At a foundational model level, there are concerns around the use of inappropriate datasets, data poisoning, and issues related to data collection, storage, and consent. At the application level, there are threats to sensitive and confidential information as well as the proliferation of capability-enhancing tools among malicious actors. Therefore, while the governance of the tech stack is a priority, governance of the organisations developing AI solutions, or the people behind the technology, could also be productive. Even as democratisation has made AI more accessible, assigning responsibility and defining accountability for the operation of AI systems have become more difficult. 


Liability Fears Damaging CISO Role, Says Former Uber CISO

The average person on the street would think it reasonable that a CISO should be responsible for all aspects of an organization’s security, Sullivan acknowledged. However, the reality is that the CISO role is unique among executive positions. “The CISO is fighting an uphill fight every day in their job. They’re begging for resources, they’re trying to get the rest of the company to slow down and think about the things they care about,” he noted. “Our job is different from everybody else’s. When you’re the executive responsible for security, you are the only executive who has active adversaries outside your organization trying to destroy you,” he added. ... Despite the growing personal risks for CISOs, Sullivan emphasized that “we should not run away from the situation,” adding that “if we do, we’ll miss a huge opportunity.” He believes a fundamental shift is coming in cybersecurity regulation, which will force organizations to revise how they approach security, and that current security professionals must be ready to facilitate this change.


Middle East CISOs Fear Disruptive Cloud Breach

Data sovereignty regulations and de-globalization trends, for example, have led to the deployment of multi-cloud infrastructures that can support regional regulations and business mandates, according to the March research report, The Future of Cloud Security in the Middle East. "You will have your own cloud service provider within each country and already countries are adopting that culture — be it in the UAE or Saudi Arabia or any other country in the region," Rajesh Yadla, head of information security at Al Hilal Bank, stated in that report. "The reason is to make sure that the cloud service providers are compliant with all these regulations." Business and government leaders have taken cybersecurity seriously, however: security is the top factor in choosing a cloud provider, prioritized by 43% of companies, compared with 19% prioritizing cost, according to the report. Both Saudi Arabia and the UAE rank in the top 10 nations for cybersecurity, as measured by the Global Cybersecurity Index 2020, the most recent such ranking of countries across the globe compiled by the International Telecommunication Union (ITU).


Parenting in the Digital Age: A Guide to Choosing Tech-Enabled Preschools

In recent years, technology integration in preschool education has become a game-changer in delivering personalised learning. By using a robust arsenal of tools – AR applications, ERP apps and much more – to make education more fun and interactive, teachers and parents have been able to tap into the receptivity of young minds, paving the way for both cognitive and emotional development. Augmented reality (AR) is an interactive experience that blends the real world with computer-generated content. It also stimulates multiple sensory modalities, opening up new avenues in preschool education. By allowing young learners to immerse themselves in realistic experiences, AR elevates the learning process with computer simulations, 3D visualisation and the like, making it enhanced, effective and evocative. Departing from the traditional chalkboard-and-chart-paper approach to preschool education, parents have seismically shifted their preference to a tech-integrated curriculum. The advent of AR technology for early childhood learning brings a layer of interactive and engaging experiences.


Cyber Strategic Ambivalence Will Hit A Tipping Point In 2024

There are indications that technological advances, geopolitics, social influences, and other externalities are creating the conditions for what Thomas Kuhn termed a “paradigm shift” (his 1962 book, The Structure of Scientific Revolutions, described the dynamics and the framework by which structural change emerges). The conditions for change that will result in a paradigm shift are the breadth, types and severity of attacks that are ongoing and will likely increase in 2024. Assessed global cyberattack losses in 2023 amount to $8 trillion, larger than any national economy except those of the US and China! In other words, the collective black market – the illicit profits generated from cybercrime – is a larger economy than that of Germany, Japan or India. That is a look at the problem in monetary terms. Cyberattacks are now regularly compromising critical infrastructure, which places public safety at risk. In May of 2023, Denmark’s critical infrastructure network experienced the largest cyberattack ever, which was highly coordinated and could have resulted in power outages.


How server makers are surfing the AI wave

There appears to be strong demand for high performance computing (HPC) hardware that includes graphics processing units (GPUs) for accelerating the performance of workloads and GPU-based servers. ... There is a growing realisation among many businesses that the hyperscalers are behind the curve with regards to supporting the intellectual property of their GenAI users. This is opening up opportunities for specialist GPU cloud providers to offer AI acceleration in a way that allows customers to train foundational AI models based on their own data. Some organisations are also likely to buy and run private cloud servers configured as GPU farms for AI acceleration, fuelling the significant growth in demand for GPU-equipped servers from the major hardware providers. HPE recently announced an expanded strategic collaboration with Nvidia to offer enterprise computing for GenAI. HPE said the co-engineered, pre-configured AI tuning and inferencing hardware and software platform enables enterprises of any size to quickly customise foundation models using private data and deploy production applications anywhere.



Quote for the day:

"Your most unhappy customers are your greatest source of learning." -- Bill Gates

Daily Tech Digest - April 27, 2022

Think of search as the application platform, not just a feature

As a developer, the decisions you make today in how you implement search will either set you up to prosper, or block your future use cases and your ability to capture this fast-evolving world of vector representation and multi-modal information retrieval. One severely blocking mindset is relying on SQL LIKE queries. This old relational database approach is a dead end for delivering search in your application platform. LIKE queries simply don’t match the capabilities or features built into Lucene or other modern search engines. They’re also detrimental to the performance of your operational workload, leading to the over-use of resources through greedy, unanchored wildcard scans. These are fossils—artifacts from SQL’s earliest days, some 50 years ago, which is like a few dozen millennia in application development. Another common architectural pitfall is proprietary search engines that force you to replicate all of your application data to the search engine when you really only need the searchable fields.
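The contrast the author draws can be made concrete with a toy example. The corpus below is invented; a real engine like Lucene adds tokenization, stemming, and relevance ranking on top of the same inverted-index idea:

```python
# Sketch: why a LIKE-style substring scan is a dead end versus even a toy
# inverted index. The three-document corpus is purely illustrative.
docs = {
    1: "fast vector search engine",
    2: "relational database basics",
    3: "search relevance and ranking",
}

# LIKE-style scan: substring match over every row, no tokenization, no ranking.
def like_query(pattern):
    return [i for i, text in docs.items() if pattern in text]

# Inverted index: one-time build, then each term lookup is a dict hit,
# independent of corpus size.
index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def term_query(term):
    return sorted(index.get(term, set()))

print(like_query("search"))  # [1, 3] — but "Search" or "searching" would miss
print(term_query("search"))  # [1, 3] — a real engine adds stemming, scoring, etc.
```

The LIKE scan touches every row on every query; the index pays the cost once at write time, which is exactly the trade-off that makes dedicated search engines scale where `WHERE col LIKE '%term%'` cannot.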


What Is a Data Reliability Engineer, and Do You Really Need One?

It’s still early days for this developing field, but companies like DoorDash, Disney Streaming Services, and Equifax are already starting to hire data reliability engineers. The most important job for a data reliability engineer is to ensure that high-quality, trustworthy data is readily available across the organization. When broken data pipelines strike (because they will at one point or another), data reliability engineers should be the first to discover data quality issues. However, that’s not always the case: too often, bad data is first discovered downstream in dashboards and reports instead of in the pipeline – or even earlier. Since data is rarely ever in its ideal, perfectly reliable state, the data reliability engineer is more often tasked with putting the tooling (like data observability platforms and testing) and processes (like CI/CD) in place to ensure that when issues happen, they’re quickly resolved and the impact is conveyed to those who need to know. Much like site reliability engineers are a natural extension of the software engineering team, data reliability engineers are an extension of the data and analytics team.


Mitigating Insider Security Threats in Healthcare

Some security experts say that risks involving insiders and cloud-based data are often misjudged by entities. "One of the biggest mistakes entities make when shifting to the cloud is to think that the cloud is a panacea for their security challenges and that security is now totally in the hands of the cloud service," says privacy and cybersecurity attorney Erik Weinick of the law firm Otterbourg PC. "Even entities that are fully cloud-based must be responsible for their own privacy and cybersecurity, and threat actors can just as readily lock users out of the cloud as they can from an office-based server if they are able to capitalize on vulnerabilities such as weak user passwords or system architecture that allows all users to have access to all of an entity's data, as opposed to just what that user needs to perform their specific job function," he says. Dave Bailey, vice president of security services at privacy and security consultancy CynergisTek, says that when entities assess threats to data within the cloud, it is incredibly important to develop and maintain solid security practices, including continuous monitoring.


Is cybersecurity talent shortage a myth?

It is a combination of things, but yes, in part technology is to blame. Vendors have made the operation of the technologies they designed an afterthought. These technologies were never made to be operated efficiently. There is also a certain fixation on technologies that just don’t offer much value, yet we keep putting a lot of work into them, like SIEMs. Unfortunately, many technologies are built upon legacy systems. This means that they carry those systems’ weaknesses and suboptimal features that were adapted from other intended purposes. For example, many people still manage alerts using cumbersome SIEMs that were originally intended to be log accumulators. The alternative is ‘first principles’ design, where the technology is developed with a particular purpose in mind. Some vendors assume that their operators are the elites of the IT world, with the highest qualifications, extensive experience, and deep knowledge of every piece of adjoining or integrating technology. Placing high barriers to entry on new technologies—time-consuming qualifications or poorly delivered, expensive courses—contributes to the self-imposed talent shortage.


How Manufacturers Can Avoid Data Silos

The first and most important step you can take to break down silos is to develop policies for governing the data. Data governance helps to ensure that everyone in a factory understands how the data should be used, accessed, and shared. Having these policies in place will help prevent silos from forming in the first place. According to Gartner data, 87 percent of manufacturers have minimal business intelligence and analytics expertise. The research found these firms less likely to have a robust data governance strategy and more prone to data silos. Data governance efforts that improve synergy and maximize data effectiveness can help manufacturing companies reduce data silos. ... Another way to break down data silos is to cultivate a culture of collaboration. Encourage employees to share information and knowledge across departments. When everyone is working together, it will be easier to avoid duplication of effort and wasted time. To break down data silos, manufacturers should move to a culture that encourages collaboration and communication from the top down.


Top 7 metaverse tech strategy do's and don'ts

Like any other technology project, a metaverse project should support overall business strategy. Although the metaverse is generating a lot of buzz right now, it is only a tool, said Valentin Cogels, expert partner and head of EMEA product and experience innovation at Bain & Company. "I don't think that anyone should think in terms of metaverse strategy; they should think about a customer strategy and then think about what tools they should use," Cogels said. "If the metaverse is one tool they should consider, that's fine." Approaching with a business goals-first approach also helps to refine the available choices, which leaders can then use to build out use cases. Serving the business goals and customers you already have is critical, said Edward Wagoner, CIO of digital at JLL Technologies, the property technology division of commercial real estate services company JLL Inc., headquartered in Chicago. "When you take that approach, it makes it a lot easier to think how [the products and services you deliver] would change if [you] could make it an immersive experience," he said.


Digital begins in the boardroom

Boards need to guard against the default of having a “technology expert” that everyone turns to whenever a digital-related issue comes onto the agenda. Rather than being a collection of individual experts, everyone on a board should have a good strategic understanding of all important areas of business – finance, sales and marketing, customer, supply chain, digital. The best boards are a group of generalists – each with certain specialisms – who can discuss issues widely and interactively, not a series of experts who take the floor in turn while everyone else listens passively. There is much that can be done to raise levels of digital awareness among executives and non-executives. Training courses, webinars, self-learning online – all these should be on the agenda. But one of the most effective ways is having experts, whether internal or external, come to board meetings to run insight sessions on key topics. For some specialist committees, such as the audit and/or risk committees, bringing in outside consultants – on cyber security, for example – is another important feature.


4 reasons diverse engineering teams drive innovation

Diverse teams can also help prevent embarrassing and troubling situations and outcomes. Many companies these days are keen to infuse their products and platforms with artificial intelligence. But as we’ve seen, AI can go terribly wrong if a diverse group of people doesn’t curate and label the training datasets. A diverse team of data scientists can recognize biased datasets and take steps to correct them before people are harmed. Bias is a challenge that applies to all technology. If a specific class of people – whether it’s white men, Asian women, LGBTQ+ people, or other – is solely responsible for developing a technology or a solution, they will likely build to their own experiences. But what if that technology is meant for a broader population? Certainly, people who have not been historically under-represented in technology are also important, but the intersection of perspectives is critical. A diverse group of developers will ensure you don’t miss critical elements. My team once developed a website for a client, for example, and we were pleased and proud of our work. But when a colleague with low vision tested it, we realized it was problematic.


Bringing Shadow IT Into the Light

IT teams are understaffed and overwhelmed after the sharp increase in support demands caused by the pandemic, says Rich Waldron, CEO and co-founder of Tray.io, a low-code automation company. “Research suggests the average IT team has a project backlog of 3-12 months, a significant challenge as IT also faces renewed demands for strategic projects such as digital transformation and improved information security,” Waldron says. There’s also the matter of employee retention during the Great Resignation hinging in part on the quality of the tech on the job. “Data shows that 42% of millennials are more likely to quit their jobs if the technology is sub-par,” says Uri Haramati, co-founder and CEO at Torii, a SaaS management provider. “Shadow IT also removes some burden from the IT department. Since employees often know what tools are best for their particular jobs, IT doesn’t have to devote as much time searching for and evaluating apps, or even purchasing them,” Haramati adds. In an age when speed, innovation and agility are essential, locking everything down instead just isn’t going to cut it. For better or worse, shadow IT is here to stay.


Log4j Attack Surface Remains Massive

"There are probably a lot of servers running these applications on internal networks and hence not visible publicly through Shodan," Perkal says. "We must assume that there are also proprietary applications as well as commercial products still running vulnerable versions of Log4j." Significantly, all the exposed open source components contained a significant number of additional vulnerabilities that were unrelated to Log4j. On average, half of the vulnerabilities were disclosed prior to 2020 but were still present in the "latest" version of the open source components, he says. Rezilion's analysis showed that in many cases when open source components were patched, it took more than 100 days for the patched version to become available via platforms like Docker Hub. Nicolai Thorndahl, head of professional services at Logpoint, says flaw detection continues to be a challenge for many organizations because while Log4j is used for logging in many applications, the providers of software don't always disclose its presence in software notes. 
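The detection challenge Thorndahl describes shows up even at the crudest level of triage: filename-based checks. Here is a hedged sketch assuming jar filenames expose the version and treating 2.17.1 as the patched cutoff; as the article notes, shaded or embedded copies give no such filename hint, so real scanners must inspect archive contents too:

```python
import re

# Sketch of dependency-level Log4Shell triage: flag log4j-core jars older than
# a fixed patched version. The filenames and the 2.17.1 cutoff are assumptions;
# real scanners also inspect shaded/fat jars where the filename reveals nothing.
PATCHED = (2, 17, 1)

def parse_version(jar_name):
    m = re.search(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", jar_name)
    return tuple(map(int, m.groups())) if m else None

def vulnerable_jars(jar_names):
    return [j for j in jar_names
            if (v := parse_version(j)) is not None and v < PATCHED]

jars = ["log4j-core-2.14.1.jar", "log4j-core-2.17.1.jar", "slf4j-api-1.7.36.jar"]
print(vulnerable_jars(jars))  # ['log4j-core-2.14.1.jar']
```

Comparing versions as integer tuples avoids the classic string-comparison bug where "2.9.1" sorts after "2.17.1".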



Quote for the day:

"Go as far as you can see; when you get there, you'll be able to see farther." -- J. P. Morgan

Daily Tech Digest - March 13, 2022

3 leadership lessons from Log4Shell

APIs add to an organization’s attack surface, so it’s important to know where they are used. Gartner estimates that roughly 90% of web apps will soon have more of their exposed attack surface area accounted for by APIs as opposed to their own interfaces. Indeed, in 2021, malicious traffic around APIs grew by nearly 350%. Despite these trends, API use only continues to grow. Gone are the days of monolithic applications. Modern enterprise web applications are built with coupled services that communicate through APIs galore, and each component is a target for attackers if left unchecked. Pair that widened attack surface with the insane growth of APIs, and the need for strong API security is clear. Organizations need to cover their entire attack surface by implementing automated and accurate scans via user interfaces and APIs if they want to eliminate potential weak spots before they become problems. Put simply, security debt is an organization’s total inventory of unresolved security issues. These issues have a wide variety of sources, including knowledge gaps, inadequate tooling or cutting corners during testing in the race to market.


Increasing security for single page applications (SPAs)

First and foremost, the frontend code operates in an insecure environment: a user’s browser. SPAs often possess a refresh token that grants offline access to a user’s resources and can obtain new access tokens without interaction from the user. As these credentials are readable by the SPA, they are vulnerable to cross-site scripting (XSS) attacks, which can have dangerous repercussions such as attackers gaining access to users’ personal data and functionalities not normally accessible through the user interface. As the online data pool grows and hackers become more sophisticated, security must be taken seriously to protect customers’ information and businesses’ reputations. However, designing security solutions for SPAs is no easy feat. As well as the strongest browser security and simple and reliable code, software developers must consider how to deliver the best user experience – wrapping all this into a solution that can be deployed anywhere. The SPA’s web content can be deployed to many global locations via a Content Delivery Network (CDN). Web content is then close geographically to all users so that web downloads are faster.
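One widely used mitigation for the token-theft risk described above is to keep the refresh token out of JavaScript's reach entirely, issuing it as an HttpOnly cookie from a backend component. A minimal sketch of such a header; the cookie name and overall auth design are assumptions, not a complete solution:

```python
# Sketch of the "backend for frontend" mitigation: the refresh token travels in
# an HttpOnly, Secure, SameSite cookie, so script injected via XSS cannot read
# it through document.cookie. Cookie name and attributes are illustrative.
def refresh_token_cookie(token):
    # The __Host- prefix requires Secure, Path=/, and no Domain attribute,
    # preventing subdomain cookie-planting attacks.
    return ("Set-Cookie: __Host-rt={}; HttpOnly; Secure; Path=/; "
            "SameSite=Strict").format(token)

header = refresh_token_cookie("opaque-token-123")
print(header)
```

This does not eliminate XSS risk (injected script can still make same-origin requests), but it removes the easiest path to long-lived credential theft.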


AI and CSR can strengthen anti-corruption efforts

In addition to CSR, there has been much excitement about the future of AI in anti-corruption work. AI has increasingly become a part of our daily lives, from digital assistants like Siri and Alexa, to self-driving cars like Teslas and ride-hailing applications like Uber. Given that AI has been useful in so many ventures, anti-corruption scholars are eager to apply it to their work. In fact, AI has been described as “the next frontier in anti-corruption.” ... However, AI and anti-corruption discussions so far have mostly focused on governmental efforts to address corporate corruption, not on companies using AI to mitigate corporate corruption — even though many of them already use AI to maximize profit. In the corporate anti-corruption context, AI can provide companies with a proposed investment destinations or transactions and help detect corruption risks in such ventures and improve due diligence processes. AI can also provide more information for yearly anti-corruption policy reviews and assist in designing training based on AI analyses of company processes, reports and operations.


Data Mesh: The Balancing Act of Centralization and Decentralization

Another concept that resonates well is data products. Managing and providing data as a product isn't the extreme of dumping raw data, which would require all consuming teams to perform repeatable work on data quality and compatibility issues. Nor is it the extreme of building an integration layer, using one (enterprise) canonical data model with strong conformance from all teams. Data product design is a nuanced approach of taking data from your (complex) operational and analytical systems and turning it into read-optimized versions for organization-wide consumption. This approach comes with lots of best practices, like aligning your data products with the language of your domain, setting clear interoperability standards for fast consumption, capturing data directly from the source of creation, addressing time-variant and non-volatile concerns, encapsulating metadata for security, ensuring discoverability, and so on. You can find more of these best practices here.


Role of the Metaverse, AI and digitalization — Are brands and consumers prepared for the new era?

The metaverse has a mostly positive impact on brands, but there are still some loopholes that worry them. For instance, the French champagne house Armand de Brignac has recently filed trademark applications to register the appearance of its gold bottle packaging in virtual reality, augmented reality, video, social media and the web. Like Armand de Brignac, many brands have established identities when it comes to product and packaging. Since this alternate reality is fairly new territory for brands, it is difficult for them to gauge whether a product or its packaging has distinctiveness outside the metaverse. Even if it does, it is unclear whether those rights will be sufficient to claim infringement inside the metaverse. Among other concerns, the metaverse also brings issues regarding privacy and security risks to light. As an online-enabled space, it may expose consumers and brands to new and unknown privacy and authenticity issues. The rise of the metaverse is just like that of the internet – former Amazon strategist Matthew Ball estimates that by 2027, every company will be a gaming company, implying that the metaverse will soon become a normal part of people’s lives.


Data Protection In The EU: New GDPR Right Of Access Guidelines

The right of access has a broad scope: in addition to basic personal data, according to the EDPB it also includes, for example, subjective notes made during a job application, a history of internet and search engine activity, etc. Unless explicitly stated otherwise, the request must be understood to relate to all personal data relating to the data subject, but the controller may ask the data subject to specify the request if it processes a large amount of data. This applies to each request: if a data subject makes more than one request, it would therefore not be sufficient to provide access only to the changes since the last request. Even data that may have been processed incorrectly or unlawfully should be provided. Data that has already been deleted, for example in accordance with a retention policy, and is therefore no longer available to the controller, does not need to be provided. Specifically, the controller will have to search all IT systems and other archives for personal data using search criteria that reflect the way the information is structured, for example, name and customer or employee number.
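In practice, the "search all IT systems" obligation looks something like the sketch below, using the kinds of criteria the EDPB guidance mentions (name, customer or employee number). The record stores and field names are invented for illustration:

```python
# Sketch of a subject-access search across several record stores. The data
# layout, system names, and fields are illustrative assumptions.
systems = {
    "crm":     [{"name": "Ada Lovelace", "customer_no": "C-42",
                 "email": "ada@example.com"}],
    "tickets": [{"name": "Ada Lovelace", "subject": "Refund request"}],
    "hr":      [{"name": "Grace Hopper", "employee_no": "E-7"}],
}

def subject_access(name=None, customer_no=None):
    """Collect ALL records matching the data subject, across every store."""
    hits = {}
    for system, records in systems.items():
        matches = [r for r in records
                   if r.get("name") == name
                   or (customer_no and r.get("customer_no") == customer_no)]
        if matches:
            hits[system] = matches
    return hits

result = subject_access(name="Ada Lovelace", customer_no="C-42")
print(sorted(result))  # ['crm', 'tickets'] — every matching store, not just one
```

Note that the search walks every system on every request, mirroring the guidance's point that each new request must return the full current picture, not a delta since the last one.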


Even 'Perfect' APIs Can Be Abused

Even those organizations that do bring a proactive focus to application security tend to put more emphasis on protecting APIs created for web and mobile applications. In these cases, many organizations often incorrectly assume that their web application firewalls (WAFs) will bear much of the load of securing this type of API usage. But the biggest API protection gap — even in sophisticated organizations — is protection of APIs that are open to partners. These APIs are ripe for abuse. Even if they are perfectly written and have no vulnerabilities, they can be abused in unanticipated ways to expose the core business functions and data of the organizations that share them. Perhaps the best example of this is the Cambridge Analytica (CA) scandal that rocked Facebook in 2018. As a brief refresher, CA exploited Facebook's open API to gather extensive data about at least 87 million users. This was accomplished by using a Facebook quiz app that exploited a permissive setting that allowed third-party apps to collect information about the quiz-taker, as well as all of their friends' interests, location data, and more.
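Because this kind of abuse exploits behavior rather than bugs, behavioral controls such as per-client rate limiting matter even for flawless APIs. Here is a minimal sliding-window sketch; the 3-requests-per-60-seconds limit is an arbitrary illustration, and production systems would also track data volumes and access patterns, not just call counts:

```python
from collections import deque

# A minimal sliding-window rate limiter — one behavioral control that can catch
# a partner harvesting data at scale through a "perfect" API. Limits are
# illustrative, not a recommendation.
class RateLimiter:
    def __init__(self, limit=3, window=60.0):
        self.limit, self.window = limit, window
        self.calls = {}  # client_id -> deque of request timestamps

    def allow(self, client_id, now):
        q = self.calls.setdefault(client_id, deque())
        while q and now - q[0] >= self.window:
            q.popleft()  # drop timestamps that fell out of the window
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

rl = RateLimiter()
decisions = [rl.allow("partner-app", t) for t in (0, 1, 2, 3)]
print(decisions)  # [True, True, True, False] — the fourth call is throttled
```

The deque keeps only in-window timestamps per client, so memory stays bounded by the limit rather than by total traffic.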


Five cloud security risks your business needs to address

“Misconfigurations remain a top risk for cloud applications and data,” says Paul Bischoff, privacy advocate and editor at Comparitech, a website that rates technologies on their cybersecurity. A misconfiguration happens when an IT team inadvertently leaves the door open for hackers by, say, failing to change a default security setting. This is often down to human error and/or a misunderstanding of how a firm’s systems operate and interact. If misconfigurations happen on a non-cloud-connected network, they’re self-contained and, potentially, accessible only to those in the physical workplace. But, once your data is in the cloud, “it is subject to someone else’s security. You do not have any direct control or ability to test it,” notes Steven Furnell, professor of cybersecurity at the University of Nottingham. “This means trusting another party’s measures, so look for the appropriate assurances from them rather than making assumptions.” 


8 technology trends for innovative leaders in a post-pandemic world

Leaders today are faced with the task of taking difficult decisions that can have a profound impact on their workforce and employee wellbeing (although it’s not all grim) in a very uncertain environment. New risks have also emerged with the staggering amount of data created on the internet, such as cyber-attacks that are increasingly frequent and costly. What our Young Global Leaders know well is that it’s easy to lead when times are going well, but real responsibility emerges when you must stand up for what you believe in. Responsible leaders truly shine in times of crisis. With this in mind, we asked eight Young Global Leaders how they will leverage technology and innovate to become better leaders in 2022. New computational and AI tools are already being used by business leaders to guide strategic decision-making. In the next decade, this software will become more powerful and will be applied in new and different settings. Built upon the mathematics of game theory, AI tools harness the computational innovations that power chess engines.


As cloud costs spiral upward, enterprises turn to a thing called FinOps

Enter FinOps. This practice is intended to help organizations get maximum business value from cloud "by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions," according to the FinOps Foundation. (Yes, there's now even an entire foundation devoted to the practice.) In many cases, organizations are practicing the art of FinOps without even calling it that. Respondents to Flexera's survey are actively involved in ongoing usage and cost management for both SaaS (69%) and public cloud IaaS and PaaS (66%). "More and more users are swimming in the FinOps side of the pool, even if they may not know it -- or call it FinOps yet," the survey's authors state. In addition, for the sixth year in a row, "optimizing the existing use of cloud is the top initiative for all respondents, underscoring the need for FinOps teams or similar ways to improve cost savings initiatives," they note. While the survey doesn't explicitly ask about FinOps adoption, the authors also state that some organizations have organized FinOps teams to help evaluate cloud computing metrics and value.



Quote for the day:

"The art of leadership is saying no, not yes. It is very easy to say yes." -- Tony Blair

Daily Tech Digest - April 11, 2021

One-stop machine learning platform turns health care data into insights

To turn reams of data into useful predictions, Cardea walks users through a pipeline, with choices and safeguards at each step. They are first greeted by a data assembler, which ingests the information they provide. Cardea is built to work with Fast Healthcare Interoperability Resources (FHIR), the current industry standard for electronic health care records. Hospitals vary in exactly how they use FHIR, so Cardea has been built to "adapt to different conditions and different datasets seamlessly," says Veeramachaneni. If there are discrepancies within the data, Cardea's data auditor points them out, so that they can be fixed or dismissed. Next, Cardea asks the user what they want to find out. Perhaps they would like to estimate how long a patient might stay in the hospital. Even seemingly small questions like this one are crucial when it comes to day-to-day hospital operations — especially now, as health care facilities manage their resources during the Covid-19 pandemic, says Alnegheimish. Users can choose between different models, and the software system then uses the dataset and models to learn patterns from previous patients, and to predict what could happen in this case, helping stakeholders plan ahead.
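Cardea's actual interface isn't shown in the article; as an illustrative sketch of the assemble-audit-predict flow it describes (every function name and data field below is hypothetical, and the "model" is a deliberately trivial baseline rather than anything Cardea ships), a minimal pipeline might look like:

```python
# Hypothetical sketch of a Cardea-style pipeline: assemble records, audit for
# discrepancies, then fit a simple length-of-stay model. Not Cardea's real API.

def assemble(records):
    """Stand-in for FHIR ingestion: keep records that have the needed fields."""
    required = {"age", "num_prior_visits", "length_of_stay"}
    return [r for r in records if required <= r.keys()]

def audit(records):
    """Flag obvious discrepancies so they can be fixed or dismissed."""
    return [r for r in records if r["length_of_stay"] >= 0 and 0 <= r["age"] <= 120]

def fit_mean_model(records):
    """Trivial baseline: average stay per 20-year age band."""
    bands = {}
    for r in records:
        bands.setdefault(r["age"] // 20, []).append(r["length_of_stay"])
    return {b: sum(v) / len(v) for b, v in bands.items()}

def predict(model, patient, default=3.0):
    """Predict length of stay (in days) for a new patient."""
    return model.get(patient["age"] // 20, default)

records = [
    {"age": 34, "num_prior_visits": 1, "length_of_stay": 2},
    {"age": 71, "num_prior_visits": 4, "length_of_stay": 6},
    {"age": 66, "num_prior_visits": 2, "length_of_stay": 4},
    {"age": -5, "num_prior_visits": 0, "length_of_stay": 3},  # caught by the auditor
]
model = fit_mean_model(audit(assemble(records)))
print(predict(model, {"age": 68}))  # average of the 60-79 band: 5.0
```

The real system swaps in proper ML models at the fitting step, but the shape of the pipeline — ingest, audit, choose a question, train, predict — is the point.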


8 Ways Digital Banking Will Evolve Over the Next 5 Years

The initial shift toward digital financial services saw an ad hoc response from regulators. As new technologies come into play and tech giants like Google and Apple become increasingly disruptive in the financial industry, these transformations will force policymakers to identify emerging threat vectors and comprehensively address risk. In contrast to today's mostly national systems of oversight, a global approach may be necessary to ensure stability in the sector, and we may see the rise of new licensing and supervisory bodies. The future of digital banking appears bright, but the unprecedented pace of innovation and shifts in consumer expectations demand a new level of agility and forward thinking. Even as financial institutions attempt to differentiate themselves from competitors, co-innovation will become an integral part of success. People and technology will both play critical roles in these developments. Tech capabilities and digital services must be extremely resilient, constantly available whenever customers need them. Human capital, however, will be as crucial as any other asset. Leaders will have to know how to upskill, reskill and retain their talent to promote innovation.


A new era of innovation: Moore’s Law is not dead and AI is ready to explode

We sometimes use artificial intelligence and machine intelligence interchangeably. This notion comes from our collaborations with author David Moschella. Interestingly, in his book “Seeing Digital,” Moschella says “there’s nothing artificial” about this: There’s nothing artificial about machine intelligence just like there’s nothing artificial about the strength of a tractor. It’s a nuance, but precise language can often bring clarity. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get “smarter” – make better models, for example, that can lead to augmented intelligence and better decisions by humans, or machines. These models improve as they get more data and iterate over time. Deep learning is a more advanced type of machine learning that uses more complex math. The right side of the chart above shows the two broad elements of AI. The point we want to make here is that much of the activity in AI today is focused on building and training models. And this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years.
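The claim that models "improve as they get more data and iterate over time" can be made concrete with a minimal, self-contained example (ours, not the article's): fitting y = 2x with one-parameter gradient descent, where extra iterations pull the learned weight toward the true value.

```python
# Minimal illustration of iterative model improvement: learn w in y = w*x
# by gradient descent on mean squared error. All numbers are illustrative.

def train(xs, ys, steps, lr=0.01):
    w = 0.0
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]  # true relationship: y = 2x
print(round(train(xs, ys, steps=5), 2))    # still far from 2.0
print(round(train(xs, ys, steps=200), 2))  # more iterations, converges to 2.0
```

Training models like this (at vastly larger scale) is what mostly happens in the cloud today; inference is running the finished model on new inputs.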


Rethinking Ecommerce as Commerce at Home

Ecommerce is all grown up. It's time to break away from the early-internet paradigm where online shopping was a new, "electronic" form of shopping. Today, almost all commerce involves varying degrees of digital elements (discovery, price comparison, personalization, selection, ordering, payment, delivery, etc.). The defining factor is not whether commerce is digital; it is the optimal location for a retailer to meet a consumer's needs. Shopping happens on a spectrum between home and the store. As such, ecommerce is better understood as commerce at home, and Amazon was the early winner. Great retailers focus on either convenience or the experiential. In the new paradigm, certain retail truths persist. For example, all great retailers have focused primarily on either convenience retail or experiential retail. To be clear, any retail can be a great experience, but the priority matters. Amazon focuses ruthlessly on convenience. The outcome is a great customer experience. To drive growth, Amazon has prioritized speed and selection over consultation and curation. Amazon's focus on convenience has yielded an incredibly high-volume, low-margin retail business.


These are the AI risks we should be focusing on

AI may never reach the nightmare sci-fi scenarios of Skynet or the Terminator, but that doesn’t mean we can shy away from facing the real social risks today’s AI poses. By working with stakeholder groups, researchers and industry leaders can establish procedures for identifying and mitigating potential risks without overly hampering innovation. After all, AI itself is neither inherently good nor bad. There are many real potential benefits that it can unlock for society — we just need to be thoughtful and responsible in how we develop and deploy it. For example, we should strive for greater diversity within the data science and AI professions, including taking steps to consult with domain experts from relevant fields like social science and economics when developing certain technologies. The potential risks of AI extend beyond the purely technical; so too must the efforts to mitigate those risks. We must also collaborate to establish norms and shared practices around AI like GPT-3 and deepfake models, such as standardized impact assessments or external review periods.


India Inc. must consider Digital Ethics framework for responsible digitalisation

An accelerated pace of digital transition, the consumption of goods and services via app-based interfaces, and the proliferation of data bring numerous risks, such as humans transferring biased decision-making processes to machines or algorithms at the development stage, a Deloitte statement said on Friday. "These biases can be a threat to the reputation and trust towards stakeholders, as well as cause operational risks," it said. Vishal Jain, Partner at Deloitte India, said the pandemic compelled businesses and consumers to embrace digital technologies like artificial intelligence, big data, cloud, IoT and more in a big way. "However, the need of the hour is to relook at the business operations layered on digital touchpoints with the lens of ethics, given biases might arise in the due course, owing to a faster response time to an issue," he said. Societal pressure to do "the right thing" now demands careful consideration of the trade-offs involved in the responsible use of technology, Jain said, adding that this interplay becomes vital to managing data privacy rights while actively adopting customer analytics for personalised service.


How to Be a Better Leader By Building a Better Tribe

All of our journeys are exquisitely different, yet each comes with a unique set of challenges that can blur our leadership lens if not properly focused. This can snowball into personal detriment. Therefore, your mental, physical, and emotional health is just as important (if not more so) than your professional and economic health — they are interrelated. Identify a therapist, wellness clinician, spiritual leader, life coach, physical trainer and/or anyone who can support your becoming an even greater version of yourself. Let's call this person the "healer". Make time for physical activity, healthy food choices and spending time with loved ones. Ensure that you make the same investment in yourself that you make in your team members. It is up to you to create your rituals for personal success. What will they entail? ... Just as you curate a list of your tribal elders, remember that you are also an elder to a younger leader in your collective. We all were afforded a different set of societal privileges based on constructs of race/ethnicity, gender, sexual orientation, cognitive and physical abilities, etc. I think it's important to utilize some of these privileges to be an ally/co-conspirator to someone who may not have the same position in society.


What is an enterprise architect? Everything you need to know about the role

The role of EA is closely connected to solutions architect, but tends to be broader in outlook. While EAs focus on the enterprise-level design of the entire IT environment, solution architects find spot solutions to specific business problems. EAs also work closely with business analysts, who analyse organisational processes, think about how technology might help, and then make sure tech requirements are implemented successfully. Looking upwards, EAs tend to work very closely with chief information officers (CIOs). While the CIO focuses on understanding the wider business strategy, the EA works to ensure that the technology that the organisation buys will help it to meet its business goals, whether that's improvements in productivity, gains in operational efficiency or developing fresh customer experiences, while also working with others – like the security team – to ensure everything remains secure. Nationwide CIO Gary Delooze is a former EA who says a really good enterprise architect will bring the business and IT teams together to create a technology roadmap.


How Blockchain Can Simplify Partnerships

To appreciate the ways in which blockchains can support complex collaborations, consider the task of shipping perishable goods across borders — a feat that requires effective coordination among suppliers, buyers, carriers, customs, and inspectors, among others. As the cargo passes from one party to the next, a flood of information is transferred with it. Each party keeps its own records and tends to communicate with one partner at a time, which often leads to inconsistent knowledge across participants, shipping delays, and even counterfeit documentation or products. If, say, the buyer expects the goods to be constantly cooled throughout the shipping process and temperatures exceed agreed thresholds, a dispute is likely to occur among the buyer, the supplier, and the carrier, which can devolve into lengthy wrangling. The carrier may haggle over liability to lower the compensation, arguing that customs, by delaying the transportation, or the inspectors, by mishandling the cargo, are the ones to blame. The buyer will ask the supplier for a remedy, who in turn needs to negotiate with the carrier. And so on. Problems like these can manifest in any collaboration that requires cumbersome information sharing among partners and may involve disputes in the process.
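A shared, append-only ledger collapses much of that wrangling: every party writes to one log, and a pre-agreed contract rule decides liability from the recorded facts. As a toy sketch (the class, field names, and liability rule below are invented for illustration — a real deployment would use an actual blockchain and smart-contract language), the cold-chain dispute above might be settled like this:

```python
# Hypothetical shared ledger for the cold-chain example: all parties append
# temperature readings to one log, and a simple contract rule assigns liability
# to whoever held custody when the agreed threshold was first breached.

class ColdChainLedger:
    def __init__(self, max_temp_c):
        self.max_temp_c = max_temp_c
        self.entries = []  # append-only, visible to every party

    def record(self, custodian, temp_c):
        self.entries.append({"custodian": custodian, "temp_c": temp_c})

    def liable_party(self):
        """Contract rule: the first custodian whose reading exceeds the threshold."""
        for e in self.entries:
            if e["temp_c"] > self.max_temp_c:
                return e["custodian"]
        return None  # no breach, no dispute

ledger = ColdChainLedger(max_temp_c=8.0)
ledger.record("supplier", 4.5)
ledger.record("carrier", 9.2)  # breach occurs during carriage
ledger.record("customs", 5.1)
print(ledger.liable_party())   # carrier
```

Because all participants see the same log and agreed on the rule in advance, there is no room to haggle over whose records to trust.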


Practical Points from the DGPO: An Introduction to Information Risk Management

Individuals are starting to pay attention to organizational vulnerabilities that compound the risks of managing, protecting, and enabling access to information: poor data quality, insufficient protection against data breaches, an inability to auditably demonstrate compliance with numerous laws and regulations, and customer concerns about the ethical and responsible corporate use of personal data. And as organizations expand their data management footprints across increasingly complex hybrid multicloud environments, there has never been a greater need for systemic information risk management. ... In general, "risk" affects the way a business operates in a number of ways. At the most fundamental level, it inhibits quality excellence. Exposure to risk not only affects project objectives but also poses threats of quantifiable damage, injury, loss, liability, or other negative occurrences that may be avoided through preemptive action. Using the Wikipedia definition as a start, we can define information risk as "the potential for loss of value due to issues associated with managing information."
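One common way to make "quantifiable damage ... avoided through preemptive action" operational — not spelled out in this excerpt, so treat it as a generic illustration — is to score each exposure as likelihood times estimated impact and rank the register, so preemptive effort goes where expected loss is highest:

```python
# Toy information-risk register: score = likelihood (0-1) x estimated impact.
# Both the entries and the numbers are illustrative assumptions.

risks = [
    {"name": "poor data quality",  "likelihood": 0.6, "impact": 50_000},
    {"name": "data breach",        "likelihood": 0.1, "impact": 900_000},
    {"name": "compliance failure", "likelihood": 0.2, "impact": 200_000},
]

def ranked(risks):
    """Order exposures by expected loss, highest first."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

for r in ranked(risks):
    print(r["name"], round(r["likelihood"] * r["impact"]))
```

Even this crude arithmetic shows why a low-probability breach can outrank an everyday data-quality problem on a risk register.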



Quote for the day:

"The actions of a responsible executive are contagious." -- Joe D. Batton