Daily Tech Digest - September 14, 2024

Three Critical Factors for a Successful Digital Transformation Strategy

Just as important as the front-end experience are the back-end operations that keep and build the customer relationship. Value-added digital services that deliver back-end operational excellence can improve the customer experience through better customer service, improved security and more. Emerging tech like artificial intelligence can give companies a substantially clearer view into their operations and customer base. Take data flow and management, for example. Many executives report they are swimming in information, yet around half admit they struggle to analyze it, according to research by PayNearMe. While data is important, the insights derived from that data are key to the conclusions executives must draw. Maintaining a digital record of customer information, transaction history, spend behaviors and other metrics, and applying AI to analyze and inform decisions, can help companies provide better service and protect their end users. They can streamline customer service, for instance, by immediately sourcing relevant information and delivering a resolution in near-real time, or by automating the analysis of spend behavior and location data to shut down potential fraudsters.


AI reshaping the management of remote workforce

In a remote work setting, one of the biggest challenges for organizations remains the streamlining of operations. For a scattered team, AI emerges as a revolutionary tool for automating shift scheduling and rostering using historical pattern analytics. Historical data on staff availability, productivity, and work patterns enables organizations to optimise schedules and strike a balance between operational needs and employee preferences. Subsequently, this reduces conflicts and enhances overall work efficiency. Beyond scheduling, AI analyses staff work durations and shifts, further enabling organizations to predict staffing needs and optimise resource allocation. This enhances capacity modelling, ensuring the right team member is available to handle tasks during peak times and preventing overstaffing or understaffing. ... With expanding use cases, AI-powered facial recognition technology has become a critical part of identity verification and security in remote work settings. Organisations need to ensure security and confidentiality at all stages of their work. In tandem, AI-powered facial recognition ensures that only authorized personnel have access to the company’s sensitive systems and data.
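
To make the rostering idea concrete, here is a minimal Python sketch (purely illustrative, not any vendor's product): it blends hypothetical historical availability and productivity scores per employee and greedily fills each shift.

```python
# Minimal greedy rostering sketch. All names and weights are hypothetical;
# a production system would add constraints (labor rules, fairness, time zones).
historical = {
    "alice": {"availability": {"mon_am": 0.9, "mon_pm": 0.4}, "productivity": 0.8},
    "bob":   {"availability": {"mon_am": 0.5, "mon_pm": 0.9}, "productivity": 0.7},
}

def shift_score(person: str, shift: str) -> float:
    """Blend past availability and productivity into one score."""
    h = historical[person]
    return 0.6 * h["availability"].get(shift, 0.0) + 0.4 * h["productivity"]

def build_roster(shifts: list[str]) -> dict[str, str]:
    roster = {}
    for shift in shifts:
        # Pick the best-scoring employee for each shift.
        roster[shift] = max(historical, key=lambda p: shift_score(p, shift))
    return roster

print(build_roster(["mon_am", "mon_pm"]))  # {'mon_am': 'alice', 'mon_pm': 'bob'}
```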


The DPDP Act: Navigating digital compliance under India’s new regulatory landscape

Adapting to the DPDPA will require tailored approaches, as different sectors face unique challenges based on their data handling practices, customer bases, and geographical scope. However, some fundamental strategies can help businesses effectively navigate this new regulatory landscape. First, conducting a comprehensive data audit is essential. Businesses need to understand what data they collect, where it is stored, and who has access to it. Mapping out data flows allows organizations to identify risks and address them proactively, laying the groundwork for robust compliance. Appointing a Data Protection Officer (DPO) is another critical step. The DPO will be responsible for overseeing compliance efforts, serving as the primary point of contact for regulatory bodies, and handling data subject requests. While it is not yet established whether appointing a DPO will be mandatory, it is safe to say that this role is vital for embedding a culture of data privacy within the organisation. Technology can also play a significant role in ensuring compliance. Tools such as Unified Endpoint Management (UEM) solutions, encryption technologies, and data loss prevention (DLP) systems can help businesses monitor data flows, detect anomalies, and prevent unauthorized access.


10 Things To Avoid in Domain-Driven Design (DDD)

To prevent potential issues, it is your responsibility to maintain a domain model that is uncomplicated and accurately reflects the domain. This diligence keeps the focus on modeling the components of the domain that offer strategic importance while streamlining or excluding less critical elements. Remember, Domain-Driven Design (DDD) is primarily concerned with strategic design, not with needlessly complicating the domain model. ... It's crucial to leverage DDD to deeply analyze and concentrate on the domain's most vital and influential parts. Identify the aspects that deliver the highest value to the business and ensure that your modeling efforts are closely aligned with the business's overarching priorities and strategic objectives. Actively collaborating with key business stakeholders is essential to understand what holds the greatest value for them and to prioritize these areas in your modeling efforts. This approach will optimally reflect the business's critical needs and contribute to the successful realization of strategic goals.


How to Build a Data Governance Program in 90 Days

With a new data-friendly CIO at the helm, Hidalgo was able to assemble the right team for the job and, at the same time, create an environment of maximum engagement with data culture. She assembled discussion teams and even a data book club that read and reviewed the latest data governance literature. In turn, that team assembled its own data governance website as a platform not just for sharing ideas but also for spreading momentum. “We kept the juices flowing, kept the excitement,” Hidalgo recalled. “And then with our data governance office and steering committee, we engaged with all departments, we have people from HR, compliance, legal, product, everywhere – to make sure that everyone is represented.” ... After choosing a technology platform in May, Hidalgo began the most arduous part of the process: preparation for a “jumpstart” campaign that would kick off in July. Hidalgo and her team began to catalog existing data one subset of data at a time – 20 KPIs or so – and complete its business glossary terms. Most importantly, Hidalgo had all along been building bridges between Shaw’s IT team, data governance crew, and business leadership to the degree that when the jumpstart was completed – on time – the entire business saw the immense value-add of the data governance that had been built.


Varied Cognitive Training Boosts Learning and Memory

The researchers observed that varied practice, not repetition, primed older adults to learn a new working memory task. Their findings, which appear in the journal Intelligence, propose diverse cognitive training as a promising whetstone for maintaining mental sharpness as we age. “People often think that the best way to get better at something is to simply practice it over and over again, but robust skill learning is actually supported by variation in practice,” said lead investigator Elizabeth A. L. Stine-Morrow ... The researchers narrowed their focus to working memory, or the cognitive ability to hold one thing in mind while doing something else. “We chose working memory because it is a core ability needed to engage with reality and construct knowledge,” Stine-Morrow said. “It underpins language comprehension, reasoning, problem-solving and many sorts of everyday cognition.” Because working memory often declines with aging, Stine-Morrow and her colleagues recruited 90 Champaign-Urbana locals aged 60-87. At the beginning and end of the study, researchers assessed the participants’ working memory by measuring each person’s reading span: their capacity to remember information while reading something unrelated.


Why Cloud Migrations Fail

One stumbling block on the cloud journey is misunderstanding or confusion around the shared responsibility model. This framework delineates the security obligations of cloud service providers, or CSPs, and customers. The model necessitates a clear understanding of end-user obligations and highlights the need for collaboration and diligence. Broad assumptions about the level of security oversight provided by the CSP can lead to security/data breaches that the U.S. National Security Agency (NSA) notes “likely occur more frequently than reported.” It’s also worth noting that 82% of breaches in 2023 involved cloud data. The confusion is often magnified in cases of a cloud “lift-and-shift,” a method where business-as-usual operations, architectures and practices are simply pushed into the cloud without adaptation to their new environment. In these cases, organizations may be slow to implement proper procedures, monitoring and personnel to match the security limitations of their new cloud environment. While the level of embedded security can differ depending on the selected cloud model, the customer must often enact strict security and identity and access management (IAM) controls to secure their environment.


AI - peril or promise?

The interplay between AI data centers and resource usage necessitates innovative approaches to mitigate environmental impacts. Advances in cooling technology, such as liquid immersion cooling and the use of recycled water, offer potential solutions. Furthermore, utilizing recycled or non-potable water for cooling can alleviate the pressure on freshwater resources. Moreover, AI itself can be leveraged to enhance the efficiency of data centers. AI algorithms can optimize energy use by predicting cooling needs, managing workloads more efficiently, and reducing idle times for servers. Predictive maintenance powered by AI can also prevent equipment failures, thereby reducing the need for excessive cooling. This is good news as the sector continues to use AI to achieve greater efficiency, cost savings and improved services, and the expected impact of AI on the operational side of data centres is very positive. Over 65 percent of survey respondents reported that their organizations are regularly using generative AI, nearly double the percentage from the 2023 survey, and around 90 percent of respondents expect their data centers to be more efficient as a direct result of AI applications.


HP Chief Architect Recalibrates Expectations Of Practical Quantum Computing’s Arrival From Generations To Within A Decade

Hewlett Packard Labs is now adopting a holistic co-design approach, partnering with other organizations developing various qubits and quantum software. The aim is to simulate quantum systems to solve real-world problems in solid-state physics, exotic condensed matter physics, quantum chemistry, and industrial applications. “What is it like to actually deliver the optimization we’ve been promised with quantum for quite some time, and achieve that on an industrial scale?” Bresniker posed. “That’s really what we’ve been devoting ourselves to—beginning to answer those questions of where and when quantum can make a real impact.” One of the initial challenges the team tackled was modeling benzyne, an exotic chemical derived from the benzene ring. “When we initially tackled this problem with our co-design partners, the solution required 100 million qubits for 5,000 years—that’s a lot of time and qubits,” Bresniker told Frontier Enterprise. Considering current quantum capabilities are in the tens or hundreds of qubits, this was an impractical solution. By employing error correction codes and simulation methodologies, the team significantly reduced the computational requirements.


New AI reporting regulations

At its core, the new proposal requires developers and cloud service providers to fulfill reporting requirements aimed at ensuring the safety and cybersecurity resilience of AI technologies. This necessitates the disclosure of detailed information about AI models and the platforms on which they operate. One of the proposal’s key components is cybersecurity. Enterprises must now demonstrate robust security protocols and engage in what’s known as “red-teaming”—simulated attacks designed to identify and address vulnerabilities. This practice is rooted in longstanding cybersecurity practices, but it does introduce new layers of complexity and cost for cloud users. Given the burden red-teaming places on enterprises, I suspect the requirement may be challenged in the courts. The regulation does increase focus on security testing and compliance. The objective is to ensure that AI systems can withstand cyberthreats and protect data. However, this is not cheap. Achieving this result requires investments in advanced security tools and expertise, typically stretching budgets and resources. My “back of the napkin” calculation puts compliance at about 10% of a system’s total cost.



Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - September 13, 2024

AI can change belief in conspiracy theories, study finds

“Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has ‘gone down the rabbit hole’ and come to believe a conspiracy theory,” the team wrote. Crucially, the researchers said, the approach relies on an AI system that can draw on a vast array of information to produce conversations that encourage critical thinking and provide bespoke, fact-based counterarguments. ... “About one in four people who began the experiment believing a conspiracy theory came out the other end without that belief,” said Costello. “In most cases, the AI can only chip away – making people a bit more sceptical and uncertain – but a select few were disabused of their conspiracy entirely.” The researchers added that reducing belief in one conspiracy theory appeared to reduce participants’ belief in other such ideas, at least to a small degree, while the approach could have applications in the real world – for example, AI could reply to posts relating to conspiracy theories on social media. Prof Sander van der Linden of the University of Cambridge, who was not involved in the work, questioned whether people would engage with such AI voluntarily in the real world.


Does Value Stream Management Really Work?

Value stream management works when it is approached holistically, according to Saraha Burnett, chief operations officer at full-service digital experience and engineering firm TMG. “Value stream management is indeed working when it is approached holistically by integrating the framework with technology and people. By mapping and optimizing every step in the customer journey, companies can eliminate waste, create efficiency and ultimately deliver sought-after value to customers,” says Burnett in an email interview. “The key lies in continuous improvement and stakeholder engagement throughout the value stream, ensuring alignment and commitment to delivering responsiveness and quality to customer needs.”


Digital ID hackathons to explore real-world use cases

The hackathons aim to address the cold start problem by involving verifiers to facilitate the widespread adoption of mDLs. In this context, the cold start problem refers to a marketplace that relies on both identity holders and verifiers. The primary focus of the hackathon will be on building minimum viable products (MVPs) that showcase the functionality of the solution. These MVPs will enable participants to test real-world use cases for mDLs. The digital version of California driver’s licenses has a variety of potential uses, according to the OpenID Foundation, including facilitating TSA checks at airport security checkpoints, verifying age for purchasing age-restricted items, accessing DMV websites online, and serving peer-to-peer identification purposes. For the hackathon, the California DMV will issue mDLs in two formats: the ISO 18013-5 standard and the W3C Verifiable Credentials v1.1 specification. The dual issuance provides verifiers with the flexibility to choose the verification method that best aligns with their system requirements, the foundation says. Christopher Goh, the national harmonization lead for digital identity at Austroads, has written a one-pager discussing the various standards within the ISO/IEC 18013-5 framework specifically related to mDLs.
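
For a sense of what the W3C format looks like, here is a hedged sketch of a Verifiable Credentials v1.1 payload for a driver's license, with all values hypothetical; the ISO 18013-5 mDL format is a separate CBOR/COSE-based encoding and is not shown.

```python
# Skeleton of a W3C Verifiable Credentials v1.1 payload for a driver's license,
# expressed as a Python dict. All values are hypothetical; the proof block
# would be produced by the issuer's signing key.
mdl_vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "DriverLicenseCredential"],
    "issuer": "did:example:ca-dmv",          # hypothetical issuer identifier
    "issuanceDate": "2024-09-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-123",      # hypothetical holder identifier
        "licenseNumber": "D1234567",
        "birthDate": "1990-01-01",
    },
    # "proof": {...}  # digital signature added by the issuer
}
```

A verifier checks the proof against the issuer's published keys, which is what gives the credential its portability across the two ecosystems the DMV is targeting.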


Microsoft VS Code Undermined in Asian Spy Attack

"While the abuse of VSCode is concerning, in our opinion, it is not a vulnerability," Assaf Dahan, director of threat research for Unit 42, clarifies. Instead, he says, "It's a legitimate feature that was abused by threat actors, as often happens with many legitimate software." And there are a number of ways organizations can protect against a bring-your-own-VSCode attack. Besides hunting for indicators of compromise (IoCs), he says, "It's also important to consider whether the organization would want to limit or block the use of VSCode on endpoints of employees that are not developers or do not require the use of this specific app. That can reduce the attack surface." "Lastly, consider limiting access to the VSCode tunnel domains '.tunnels.api.visualstudio[.]com' or '.devtunnels[.]ms' to users with a valid business requirement. Notice that these domains are legitimate and are not malicious, but limiting access to them will prevent the feature from working properly and consequently make it less attractive for threat actors," he adds.


Rather Than Managing Your Time, Consider Managing Your Energy

“Achievement is no longer enough to be successful,” Sunderland says. “People also want to feel happy at the same time. Before, people were concerned only with thinking (mental energy) and doing (physical energy). But that success formula no longer works. Today, it’s essential to add feelings (emotional energy) and inner self-experience (spiritual energy) into the mix for people to learn how to be able to connect to and manage their energy.” ... Sunderland says all forms of human energy exist in relation to one another. “When these energies are in sync with each other, people’s energy will be in flow. People who maintain good health will be able to track those feelings (emotional energy) that flow through their bodies (physical energy), which is an essential skill to help increase energy awareness. With greater levels of energy awareness, people can grow their self-acceptance (emotional energy), which enhances their self-confidence.” He says that as confidence builds, people experience greater clarity of thought (mental energy) and they are able to increase their ability to speak truth (spiritual energy), amplifying their creative energy. 


Mastercard Enhances Real-Time Threat Visibility With Recorded Future Purchase

The payments network has made billions of dollars worth of acquisitions through the years. Within the security solutions segment of Mastercard, key focal points center on examining and protecting digital identities, protecting transactions and using insights from 143 billion annual payments to fashion real-time intelligence that can be used by merchants and FIs to anticipate new threats. By way of example, the firm acquired Ekata in 2021 to score transactions for the likelihood of fraud through robust identity verification. All told, Mastercard has invested more than $7 billion over the past five years in its efforts to protect the digital economy. Artificial intelligence (AI) is a key ingredient here, and Gerber detailed to PYMNTS that the company has been a pioneer in harnessing generative AI to extract trends from huge swaths of data to create “identity graphs” that provide immediate value to any merchant or FI that wants to understand more about the individuals that are interacting with them in the digital realm. The use of other “intelligence graphs” connects the dots across data points to turn threat-related data into actionable insights.


2 Open Source AI Tools That Reduce DevOps Friction

DevOps has been built upon taking everything infrastructure-related and transitioning it to code, aka Infrastructure as Code (IaC). This includes deployment pipelines, monitoring, repositories — anything that is built upon configurations can be represented in code. This is where AI tools like ChatGPT and AIaC come into play. AIaC, an open source command-line interface (CLI) tool, enables developers to generate IaC templates, shell scripts and more, directly from the terminal using natural language prompts. This eliminates the need to manually write and review code, making the process faster and less error-prone. ... The use of AI in DevOps is still in its early stages, but it’s quickly gaining momentum with the introduction of new open source and commercial services. The rapid pace of innovation suggests that AI will soon be embedded in most DevOps tools. From automated code generation with AIaC to advanced diagnostics with K8sGPT, the possibilities seem endless. Firefly is not just observing this revolution — it’s actively contributing to it. By integrating AI into DevOps workflows, teams can work smarter, not harder.
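
The pattern underneath such tools is simple: send a natural-language prompt to an LLM and write the returned template to disk. Below is a minimal sketch using the OpenAI Python client; the model name and prompt are placeholders, and this is not AIaC's actual implementation.

```python
# Sketch of natural-language-to-IaC generation, assuming the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
# AIaC wraps a flow like this behind a CLI; this is not its code.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Generate a Terraform template for a private S3 bucket "
                   "with versioning enabled. Return only HCL code.",
    }],
)

# Persist the generated template so it can be reviewed and applied.
with open("bucket.tf", "w") as f:
    f.write(resp.choices[0].message.content)
```

Generated templates still deserve the same review and scanning as hand-written ones, which is exactly where the next item picks up.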


How to make Infrastructure as Code secure by default

Scanning IaC templates before deployment is undeniably important; it’s an effective way to identify potential security issues early in the development process. It can help prevent security breaches and ensure that your cloud infrastructure aligns with security best practices. If you have IaC scanning tools integrated into your CI/CD pipelines, you can also run automated scans with each code commit or pull request, catching errors early. Post-deployment scans are important because they assess the infrastructure in its operational environment, which may result in finding issues that weren’t identified in dev and test environments. These scans may also identify unexpected dependencies or conflicts between resources. Any manual fixes you make to address these problems will also require you to update your existing IaC templates; otherwise, any apps using those templates will be deployed with the same issues baked in. And while identifying these issues in production environments is important to overall security, it can also increase your costs and require your team to apply manual fixes to both the application and the IaC.
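
As a sketch of the pre-deployment gate described above, the snippet below runs the open source scanner Checkov from a pipeline step and blocks on findings; the directory and flags are illustrative, so verify them against your installed version.

```python
# Gate a pipeline step on an IaC scan, assuming the open source scanner
# Checkov (pip install checkov) is on PATH. A non-zero exit code means
# failed policy checks, so the deployment is blocked.
import subprocess
import sys

result = subprocess.run(
    ["checkov", "-d", "infra/", "-o", "json"],  # scan the IaC directory
    capture_output=True, text=True,
)
if result.returncode != 0:
    print(result.stdout)  # JSON report of the failed checks
    sys.exit("IaC scan failed - fix findings before deploying")
print("IaC scan passed")
```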


New brain-on-a-chip platform to deliver 460x efficiency boost for AI tasks

Despite its novel approach, IISc’s platform is designed to work alongside existing AI hardware, rather than replace it. Neuromorphic accelerators like the one developed by IISc are particularly well-suited for offloading tasks that involve repetitive matrix multiplication — a common operation in AI. “GPUs and TPUs, which are digital, are great for certain tasks, but our platform can take over when it comes to matrix multiplication. This allows for a major speed boost,” explained Goswami. ... As the demand for more advanced AI models increases, existing digital systems are nearing their energy and performance limits. Silicon-based processors, which have driven AI advancements for years, are starting to show diminishing returns in terms of speed and efficiency. “With silicon electronics reaching saturation, designing brain-inspired accelerators that can work alongside silicon chips to deliver faster, more efficient AI is becoming crucial,” Goswami noted. By working with molecular films and analog computing, IISc is offering a new path forward for AI hardware, one that could dramatically cut energy consumption while boosting computational power.


Android Trojans Still Pose a Threat, Researchers Warn

Affected users appear to have been tricked into installing the malware, which doesn't appear to be getting distributed via official Google channels. "Based on our current detections, no apps containing this malware are found on Google Play," a Google spokesperson told Information Security Media Group. "Android users are automatically protected against known versions of this malware by Google Play Protect, which is on by default on Android devices with Google Play Services," the spokesperson said. "Google Play Protect can warn users or block apps known to exhibit malicious behavior, even when those apps come from sources outside of Play." Researchers said they first spotted the malware when it was uploaded to analysis site VirusTotal in May from Uzbekistan, in the form of a malicious app made to appear as if it was developed by a "local tax authority." By tracing the IP address to which the malware attempted to "phone home," the researchers found other .apk - Android package - files that showed similar behavior, which they traced to attacks that began by November 2023.



Quote for the day:

"Sometimes it takes a good fall to really know where you stand." -- Hayley Williams

Daily Tech Digest - September 12, 2024

Navigating the digital economy: Innovation, risk, and opportunity

As we move towards the era of Industry 5.0, the digital economy needs to adopt a Human-Centred Design (HCD) approach where technology layers revolve around humans at the core. By 2030, Organoid Intelligence (OI) is envisaged to rule the digital economy space, with potential across multiple disciplines and superintelligent capabilities. This capability shall democratize digital economy services across sectors in a seamless manner. This rapid technology adoption exposes the system to cyber risks, which calls for advanced future security solutions such as quantum security embedded with digital currencies such as the e-Rupee and cryptocurrencies. The ‘e-rupee’, a virtual equivalent of cash stored in a digital wallet, offers anonymity in payments. ... Indian banks are already piloting blockchain for issuing Letters of Credit, and integrating UPI with blockchain could combine the strengths of both systems, ensuring greater security, ease of use, and instant transactions. Such cyber security threats also create an opportunity for Bitcoin and other cryptocurrencies to expand from their current offering towards sectors such as gaming.


From DevOps to Platform Engineering: Powering Business Success

Platform engineering provides a solution with the tools and frameworks needed to scale software delivery processes, ensuring that organizations can handle increasing workloads without sacrificing quality or speed. It also leads to improved consistency and reliability. By standardizing workflows and automating processes, platform engineering reduces the variability and risk associated with manual interventions. This leads to more consistent and reliable deployments, enhancing the overall stability of applications in production. Further productivity comes from the efficiency it offers developers themselves. Developers are most productive when they can focus on writing code and solving business problems. Platform engineering removes the friction associated with provisioning resources, managing environments, and handling operational tasks, allowing developers to concentrate on what they do best. It also provides the infrastructure and tools needed to experiment, iterate, and deploy new features rapidly, enabling organizations to stay ahead of the curve.


Scaling Databases To Meet Enterprise GenAI Demands

A hybrid approach combines vertical and horizontal scalability, providing flexibility and maximizing resource utilization. Organizations can begin with vertical scaling to enhance the performance of individual nodes and then transition to horizontal scaling as data volumes and processing demands increase. This strategy allows businesses to leverage their existing infrastructure while preparing for future growth — for example, initially upgrading servers to improve performance and then distributing the database across multiple nodes as the application scales. ... Data partitioning and sharding involve dividing large datasets into smaller, more manageable pieces distributed across multiple servers. This approach is particularly beneficial for vector databases, where partitioning data improves query performance and reduces the load on individual nodes. Sharding allows a vector database to handle large-scale data more efficiently by distributing the data across different nodes based on a predefined shard key. This ensures that each node only processes a subset of the data, optimizing performance and scalability.
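
A minimal sketch of the shard-key routing described here, assuming a simple hash-based scheme (real systems typically add consistent hashing and rebalancing):

```python
# Hash-based sharding sketch: route each record to one of N nodes using a
# predefined shard key, so each node only processes a subset of the data.
import hashlib

NODES = ["node-0", "node-1", "node-2"]

def shard_for(key: str) -> str:
    """Map a shard key (e.g., a document or vector ID) to a node."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

print(shard_for("doc-42"))  # always routes to the same node
print(shard_for("doc-43"))  # likely lands on a different node
```

The design choice to hash the key (rather than, say, range-partition it) spreads load evenly but makes range queries cross shards, which is why vector databases often pick the shard key to match their dominant access pattern.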


Safeguarding Expanding Networks: The Role of NDR in Cybersecurity

NDR plays a crucial role in risk management by continuously monitoring the network for any unusual activities or anomalies. This real-time detection allows security teams to catch potential breaches early, often before they can cause serious damage. By tracking lateral movements within the network, NDR helps to contain threats, preventing them from spreading. Plus, it offers deep insights into how an attack occurred, making it easier to respond effectively and reduce the impact. ... When it comes to NDR, key stakeholders who benefit from its implementation include Security Operations Centre (SOC) teams, IT security leaders, and executives responsible for risk management. SOC teams gain comprehensive visibility into network traffic, which reduces false positives and allows them to focus on real threats, ultimately lowering stress and improving their efficiency. IT security leaders benefit from a more robust defence mechanism that ensures complete network coverage, especially in hybrid environments where both managed and unmanaged devices need protection.


Application detection and response is the gap-bridging technology we need

In the shared-responsibility model, not only is there the underlying cloud service provider (CSP) to consider, but there are external SaaS integrations and internal development and platform teams, as well as autonomous teams across the organization often leading to opaque systems with a lack of clarity around where responsibilities begin and end. On top of that, there are considerations around third-party dependencies, components, and vulnerabilities to address. Taking that further, the modern distributed nature of systems creates more opportunities for exploitation and abuse. One example is modern authentication and identity providers, each of which is a potential attack vector over which you have limited visibility due to not owning the underlying infrastructure and logging. Finally, there’s the reality that we’re dealing with an ever-increasing velocity of change. As the industry continues further adoption of DevOps and automation, software delivery cycles continue to accelerate. That trend is only likely to increase with the use of genAI-driven copilots. 


Data Is King. It Is Also Often Unlicensed or Faulty

A report published in the Nature Machine Intelligence journal presents a large-scale audit of dataset licensing and attribution in AI, analyzing over 1,800 datasets used in training AI models on platforms such as Hugging Face. The study revealed widespread miscategorization, with over 70% of datasets omitting licensing information and over 50% containing errors. In 66% of the cases, the licensing category was more permissive than intended by the authors. The report cautions against a "crisis in misattribution and informed use of popular datasets" that is driving recent AI breakthroughs but also raising serious legal risks. "Data that includes private information should be used with care because it is possible that this information will be reproduced in a model output," said Robert Mahari, co-author of the report and JD-PhD at MIT and Harvard Law School. In the vast ocean of data, licensing defines the legal boundaries of how data can be used. ... "The rise in restrictive data licensing has already caused legal battles and will continue to plague AI development with uncertainty," said Shayne Longpre, co-author of the report and research Ph.D. candidate at MIT. 


AI interest is driving mainframe modernization projects

AI and generative AI promise to transform the mainframe environment by delivering insights into complex unstructured data, augmenting human action with advances in speed, efficiency and error reduction, while helping to understand and modernize existing applications. Generative AI also has the potential to illuminate the inner workings of monolithic applications, Kyndryl stated. “Enterprises clearly see the potential, with 86% of respondents confirming they are deploying, or planning to deploy, generative AI tools and applications in their mainframe environments, while 71% say that they are already implementing generative AI-driven insights as part of their mainframe modernization strategy,” Kyndryl stated. ... While AI will likely shape the future for mainframes, a familiar subject remains a key driver for mainframe investments: security. “Given the ongoing threat from cyberattacks, increasing regulatory pressures, and an uptick in exposure to IT risk, security remains a key focus for respondents this year, with almost half (49%) of the survey respondents citing security as the number one driver of their mainframe modernization investments in the year ahead,” Kyndryl stated.


How AI Is Propelling Data Visualization Techniques

AI has improved data processing and cleaning. AI identifies missing data and inconsistencies, which means we end up with more reliable datasets for effective visualization. Personalization is yet another benefit AI has brought. AI-powered tools can tailor visualizations based on set goals, context, and preferences. For example, a user can provide their business requirements, and AI will provide a customized chart and information layout based on these requirements. This saves time and can also be helpful when creativity isn’t flowing as well as we’d like. ... It’s useful for geographic data visualization in particular. While traditional maps provide a top-down perspective, AR mapping systems use existing mapping technologies, such as GPS, satellite images, and 3D models, and combine them with real-time data. For example, Google’s Lens in Maps feature uses AI and AR to help users navigate their surroundings by lifting their phones and getting instant feedback about the nearest points of interest. Business users will appreciate how AI automates insights with natural language generation (NLG).


Framing The Role Of The Board Around Cybersecurity Is No Longer About Risk

Having set an unequivocal level of accountability with one executive for cybersecurity, the Board may want to revisit the history of the firm with regard to cyber protection, to ensure that mistakes are not repeated, that funding is sufficient and, overall, that the right timeframes are set and respected, in particular over the mid- to long-term horizon if large-scale transformative efforts are required around cybersecurity. We start to see a list of topics emerging, broadly matching my earlier pieces around the “key questions the Board should ask”, but more than ever, executive accountability is key in the face of current threats to start building up a meaningful and powerful top-down dialogue around cybersecurity. Readers may notice that I have not used the word “risk” even once in this article. Ultimately, risk is about things that may or may not happen: in the face of the “when-not-if” paradigm around cyber threats – and increasingly other threats as well – it is essential for the Board to frame and own business protection as a topic rooted in the reality of the world we live in, not some hypothetical matter which could be somehow mitigated, transferred or accepted.


Embracing First-Party Data in a Cookie-Alternative World

Unfortunately, the transition away from third-party cookies presents significant challenges that extend beyond shifting customer interactions. Many businesses are particularly concerned about the implications for data security and privacy. When looking into alternative data sources, businesses may inadvertently expose themselves to increased security risks. The shift to first-party data collection methods requires careful evaluation and implementation of advanced security measures to protect against data breaches and fraud. It is also crucial to ensure the transition is secure and compliant with evolving data privacy regulations. To ensure the data is secure, businesses should go beyond standard encryption practices and adopt advanced security measures such as tokenization for sensitive data fields, which minimizes the risk of exposing real data in the event of a breach. Additionally, regular security audits are crucial. Organizations should leverage automated tools for continuous security monitoring and compliance checks that can provide real-time alerts on suspicious activities, helping to preempt potential security incidents. 
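
To illustrate the tokenization idea, here is a minimal Python sketch in which sensitive values are swapped for random tokens and the mapping lives in a separate vault; a production vault would be a hardened, audited service, not an in-memory dict.

```python
# Tokenization sketch: replace sensitive fields with random tokens and keep
# the mapping in a separate vault, so a breach of the main store exposes
# only meaningless tokens.
import secrets

vault: dict[str, str] = {}  # token -> real value (held in a separate system)

def tokenize(value: str) -> str:
    """Swap a sensitive value for an opaque random token."""
    token = "tok_" + secrets.token_urlsafe(16)
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Authorized lookup of the real value via the vault."""
    return vault[token]

record = {"name": "Jane Doe", "card": tokenize("4111-1111-1111-1111")}
print(record)                      # the main store never sees the card number
print(detokenize(record["card"]))  # only vault-authorized code can reverse it
```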



Quote for the day:

“It's not about who wants you. It's about who values and respects you.” -- Unknown

Daily Tech Digest - September 11, 2024

Unlocking the Quantum Internet: Germany’s Latest Experiment Sets Global Benchmarks

“Comparative analysis with existing QKD systems involving SPS reveals that the SKR achieved in this work goes beyond all current SPS-based implementations. Even without further optimization of the source and setup performance, it approaches the levels attained by established decoy state QKD protocols based on weak coherent pulses,” remarked the first author of the work, Dr. Jingzhong Yang. The researchers speculate that QDs also offer great prospects for the realization of other quantum internet applications, such as quantum repeaters and distributed quantum sensing, as they allow for inherent storage of quantum information and can emit photonic cluster states. The outcome of this work underscores the viability of seamlessly integrating semiconductor single-photon sources into realistic, large-scale, and high-capacity quantum communication networks. The need for secure communication is as old as humanity itself. Quantum communication uses the quantum characteristics of light to ensure that messages cannot be intercepted. “Quantum dot devices emit single photons, which we control and send to Braunschweig for measurement. This process is fundamental to quantum key distribution,” Ding said.


How AI Impacts Sustainability Opportunities and Risks

While AI can be applied to sustainability challenges, there are also questions around the sustainability of AI itself given technology’s impact on the environment. “We know that many companies are already dealing with the ramifications of increased energy usage and water usage as they're building out their AI models,” says Shim. ... As the AI market goes through its growing pains, chips are likely to become more efficient and use cases for the technology will become more targeted. But predicting the timeline for that potential future or simply waiting for it to happen is not the answer for enterprises that want to manage opportunities and risks around AI and sustainability now. Rather than getting caught up in “paralysis by analysis,” enterprise leaders can take action today that will help to actually build a more sustainable future for AI. With AI having both positive and negative impacts on the environment, enterprise leaders who wield it with targeted purpose are more likely to guide their organizations to sustainable outcomes. Throwing AI at every possible use case and seeing what sticks is more likely to tip the scales toward a net negative environmental impact. 


Agentic AI: A deep dive into the future of automation

Agentic AI combines classical automation with the power of modern large language models (LLMs), using the latter to simulate human decision-making, analysis and creative content. The idea of automated systems that can act is not new, and even a classical thermostat that can turn the heat and AC on and off when it gets too cold or hot is a simple kind of “smart” automation. In the modern era, IT automation has been revolutionized by self-monitoring, self-healing and auto-scaling technologies like Docker, Kubernetes and Terraform which encapsulate the principles of cybernetic self-regulation, a kind of agentic intelligence. These systems vastly simplify the work of IT operations, allowing an operator to declare (in code) the desired end-state of a system and then automatically align reality with desire—rather than the operator having to perform a long sequence of commands to make changes and check results. However powerful, this kind of classical automation still requires expert engineers to configure and operate the tools using code. Engineers must foresee possible situations and write scripts to capture logic and API calls that would be required. 
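
The declarative, self-regulating pattern the author describes can be shown in a toy reconciliation loop, sketched below with illustrative names; real controllers like Kubernetes watch events rather than polling.

```python
# Toy reconciliation loop in the spirit of Kubernetes/Terraform: the operator
# declares a desired state, and the loop continually aligns actual state
# with it. States and names are illustrative only.
import time

desired = {"web_replicas": 3}
actual = {"web_replicas": 1}

def reconcile(desired: dict, actual: dict) -> None:
    """Close the gap between actual and desired state one step at a time."""
    gap = desired["web_replicas"] - actual["web_replicas"]
    if gap > 0:
        actual["web_replicas"] += 1   # stands in for "launch one instance"
        print(f"scaled up to {actual['web_replicas']}")
    elif gap < 0:
        actual["web_replicas"] -= 1   # stands in for "terminate one instance"
        print(f"scaled down to {actual['web_replicas']}")

while actual != desired:
    reconcile(desired, actual)
    time.sleep(0.1)  # real controllers react to events instead of polling
print("actual state matches desired state")
```

The agentic step the article points toward is replacing the hand-written reconcile logic with an LLM that decides which corrective action to take.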


How to Make Technical Debt Your Friend

When a team identifies that they are incurring technical debt, they are basing that assessment on their theoretical ideal for the architecture of the system, but that ideal is just their belief based on assumptions that the system will be successful. The MVP may be successful, but in most cases its success is only partial - that is the whole point of releasing MVPs: to learn things that can be understood in no other way. As a result, assumptions about the MVA that the team needs to build also tend to be at least partially wrong. The team may think that they need to scale to a large number of users or support large volumes of data, but if the MVP is not overwhelmingly appealing to customers, these needs may be a long way off, if they are needed at all. For example, the team may decide to use synchronous communications between components to rapidly deliver an MVP, knowing that an asynchronous model would offer better scalability. However, the switch between synchronous and asynchronous models may never be necessary since scalability may not turn out to be an issue.


What CIOs should consider before pursuing CEO ambitions

The trend is encouraging, but it’s important to temper expectations. While CIOs have stepped up and delivered digital strategies for business transformation, using those successes as a platform to move into a CEO position could throw a curveball. Jon Grainger, CTO at legal firm DWF, says one key challenge is industrial constraints. “You’ve got to remember that, in a sector like professional services, there are things you’re going to be famous for,” he says. “DWF is famous for providing amazing legal services. And to do that, the bottom line is you’ve got to be a lawyer — and that’s not been my path.” He says CIOs can become CEOs, but only in the right environment. “If the question was rephrased to, ‘Jon, could you see yourself as a CEO?,’ then I would say, ‘Yes, absolutely.’ But I would say I’m unlikely to become the CEO of a legal services company because, ultimately, you’ve got to have the right skill set.” Another challenge is the scale of the transition. Compared to the longevity of other C-suite positions, technology leadership is an executive fledgling. Many CIOs — and their digital leadership peers, such as chief data or digital officers — are focused squarely on asserting their role in the business.


Immediate threats or long-term security? Deciding where to focus is the modern CISO’s dilemma

CISOs need to balance their budgets between immediate threat responses and long-term investments in cybersecurity infrastructure, says Eric O’Neill, national security strategist at NeXasure and a former FBI operative who helped capture former FBI special agent Robert Hanssen, the most notorious spy in US history. While immediate threats require attention, CISOs should allocate part of their budgets to long-term planning measures, such as implementing multi-factor authentication and phased infrastructure upgrades, he says. “This balance often involves hiring incident response partners on retainer to handle breaches, thereby allowing internal teams to focus on prevention and detection,” O’Neill says. “By planning phased rollouts for larger projects, CISOs can spread costs over time while still addressing immediate vulnerabilities.” Clare Mohr, US cyber intelligence lead at Deloitte, says a common approach is for CISOs to allocate 60 to 70% of their budgets to immediate threat response and the remainder to long-term initiatives – although this varies from company to company. “This distribution should be flexible and reviewed annually based on evolving threats,” she says.


Would you let an AI robot handle 90% of your meetings?

“Let’s assume, fast-forward five or six years, that AI is ready. AI probably can help for maybe 90 per cent of the work,” he said. “You do not need to spend so much time [in meetings]. You do not have to have five or six Zoom calls every day. You can leverage the AI to do that.” Even more interestingly, Yuan alluded to your digital clone potentially being programmed to be better equipped to deal with areas you don’t feel confident in, for example, negotiating a deal during a sales call. “Sometimes I know I’m not good at negotiations. Sometimes I don’t join a sales call with customers,” he explained. “I know my weakness before sending a digital version of myself. I know that weakness. I can modify the parameter a little bit.” ... According to Microsoft’s 2024 Work Trend Index, 75 per cent of knowledge workers use AI at work every day. This is despite 46 per cent of those users having started using it less than six months ago. ... However, leaders are lagging behind when it comes to incorporating AI productivity tools – 59 per cent worry about quantifying the productivity gains of AI, and as a result, 78 per cent of AI users are bringing their own AI tools to work and 52 per cent of those who use AI at work are reluctant to admit to it for fear it makes them look replaceable.


Understanding the Importance of Data Resilience

Understanding an organization’s current level of data resilience is crucial for identifying areas that need improvement. Key indicators of data resilience include the Recovery Point Objective (RPO), which refers to the maximum acceptable amount of data loss measured in time. A lower RPO signifies a higher level of data resilience, as it minimizes the amount of data at risk during an incident. The Recovery Time Objective (RTO) is the target time for recovering IT and business activities after a disruption. A shorter RTO indicates a more resilient data strategy, as it enables quicker restoration of operations. Data integrity involves maintaining the accuracy and consistency of data over its lifecycle, implementing measures to prevent data corruption, unauthorized access, and accidental deletions. System redundancy, which includes having multiple data centers, failover systems, and cloud-based backups, ensures continuous data availability by providing redundant systems and infrastructure. Building sustainable data resilience requires a long-term commitment to continuous improvement and adaptation. 
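
A worked example of the RPO metric, assuming illustrative values: if backups run every four hours, the worst-case loss is one full interval, so a one-hour RPO is violated.

```python
# Worked RPO check: flag a backup schedule that cannot meet a declared RPO.
# Values are illustrative.
from datetime import timedelta

rpo = timedelta(hours=1)              # maximum tolerable data loss
backup_interval = timedelta(hours=4)  # how often backups actually run

# Worst case, an incident strikes just before the next backup, losing one
# full interval of data.
worst_case_loss = backup_interval
if worst_case_loss > rpo:
    print(f"RPO violated: could lose {worst_case_loss}, tolerance is {rpo}")
else:
    print("Backup schedule meets the RPO")
```

The same shape of check applies to RTO: compare the measured time to restore service against the declared target.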


Examining Capabilities-Driven AI

Organizations often respond to trends in technology by developing centralized organizations to adopt the underlying technologies associated with a trend. The industry has decades of experience demonstrating that centralized approaches to adopting technology result in large, centralized cost pools that generate little business value. Since the past is often a good predictor of the future, we expect that many companies will attempt to adopt AI by creating centralized organizations or “centers of excellence,” only to burn millions of dollars without generating significant business value. AI-enablement is much easier to accomplish within a capability than across an entire organization. Organizations can evaluate areas of weakness within a business capability, identify ways to either improve the customer experience and/or reduce the cost to serve, and target improvement levels. Once the improvement is quantified into an economic value, this value can be used to bound the build and operate cost of AI-enhanced capability. Benefit and cost parameters are important because knowledge engineering is often the largest cost associated with an AI-enabled business process. 


SOAR Is Dead, Long Live SOAR

While the core use case for SOAR remains strong, the combination of artificial intelligence, automation, and the current plethora of cybersecurity products will result in a platform that could take market share from SOAR systems, such as an AI-enabled next-generation SIEM, says Eric Parizo, managing principal analyst at Omdia. "SOC decision-makers are [not] going out looking to purchase orchestration and automation as much as they're looking to solve the problem of fostering a faster, more efficient TDIR [threat detection, investigation, and response] life cycle with better, more consistent outcomes," he says. "The orchestration and automation capabilities within standalone SOAR solutions are intended to facilitate those business objectives." AI and machine learning will continue to increasingly augment automation, says Sumo Logic's Clawson. While creating AI security agents that process data and automatically respond to threats is still in its infancy, the industry is clearly moving in that direction, especially as more infrastructure uses an "as-code" approach, such as infrastructure-as-code, he says. The result could be an approach that reduces the need for SOAR.



Quote for the day:

"Kind words do not cost much. Yet they accomplish much." -- Blaise Pascal

Daily Tech Digest - September 10, 2024

Will genAI kill the help desk and other IT jobs?

AI is transforming cybersecurity by automating threat detection, anomaly detection, and incident response. “AI-powered tools can quickly identify unusual behavior, analyze security patterns, scan for vulnerabilities, and even predict cyberattacks, making manual monitoring less necessary,” Foote said. “Security professionals will focus more on developing AI models that can defend against complex threats, especially as cybercriminals begin using AI to attack systems. There will be a demand for experts in AI ethics in cybersecurity, ensuring that AI systems used in security aren’t biased or misused.” IT support and systems administration positions — especially tier-one and tier-two help desk jobs — are expected to be hit particularly hard with job losses. Those jobs entail basic IT problem resolution and service desk delivery, as well as more in-depth technical support, such as software updates, which can be automated through AI today. The help desk jobs that remain would involve more hands-on work that cannot be resolved via a phone call or electronic message. ... Data scientists and analysts, on the other hand, will be in greater demand with AI, but their tasks will shift towards more strategic areas like interpreting AI-generated insights and ensuring ethical use of AI.


Just-in-Time Access: Key Benefits for Cloud Platforms

Identity and access management (IAM) is a critical component of cloud security, and organizations are finding it challenging to implement it effectively. As businesses increasingly rely on multiple cloud environments, they face the daunting task of managing user identities across all their cloud systems. This requires an IAM solution that can support multiple cloud environments and provide a single source of truth for identity information. One of the most pressing challenges is the management of identities for non-human entities such as applications, services and APIs. IAM solutions must be capable of managing these identities, providing visibility, controlling access and enforcing security policies for non-human entities. ... Just-in-time (JIT) access is a fundamental security practice that addresses many of the challenges associated with traditional access management approaches. It involves granting access privileges to users for limited periods on an as-needed basis. This approach helps minimize the risk of standing privileges, which can be exploited by malicious actors. The concept of JIT access aligns with the principle of least privilege, which is essential for maintaining a robust security posture. 
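
A minimal sketch of the JIT principle, with hypothetical names: every grant carries an expiry, so no privilege is left standing. A real implementation would issue short-lived credentials at the IAM layer rather than check a local object.

```python
# JIT access sketch: access is granted for a limited window on request, so
# there are no standing privileges for attackers to exploit.
import time
from dataclasses import dataclass

@dataclass
class Grant:
    user: str
    resource: str
    expires_at: float  # epoch seconds

def grant_access(user: str, resource: str, ttl_seconds: int) -> Grant:
    """Issue a time-bounded grant instead of a permanent permission."""
    return Grant(user, resource, time.time() + ttl_seconds)

def is_allowed(grant: Grant, user: str, resource: str) -> bool:
    """The grant is only honored while its window is open."""
    return (grant.user == user and grant.resource == resource
            and time.time() < grant.expires_at)

g = grant_access("alice", "prod-db", ttl_seconds=900)  # 15-minute window
print(is_allowed(g, "alice", "prod-db"))  # True now, False after expiry
```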


Maximize Cloud Efficiency: What Is Object Storage?

Although object storage has existed in one form or another for quite some time, its popularity has surged with the growth of cloud computing. Cloud providers have made object storage more accessible and widespread. Cloud storage platforms generally favor object storage because it allows limitless capacity and scalability. Furthermore, object storage usually gets accessed via a RESTful API instead of conventional storage protocols like Server Message Block (SMB). This RESTful API access makes object storage easy to integrate with web-based applications. ... Object storage is typically best suited for situations where you need to store large amounts of data, especially when you need to store that data in the cloud. In cloud environments, block storage often stores virtual machines. File storage is commonly employed as a part of a managed solution, replacing legacy file servers. Of course, these are just examples of standard use cases. There are numerous other uses for each type of storage. ... Object storage is well-suited for large datasets, typically offering a significantly lower cost per gigabyte (GB). Having said that, many cloud providers sell various object storage tiers, each with its own price and performance characteristics.
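
For a feel of the RESTful access model, here is a short boto3 sketch against an S3-compatible endpoint; the bucket and key names are placeholders, and credentials are assumed to come from the environment.

```python
# Object storage access via an S3-compatible API, using boto3
# (pip install boto3). The same PUT/GET calls work against AWS S3 and most
# S3-compatible services; bucket/key names below are placeholders.
import boto3

s3 = boto3.client("s3")  # credentials come from the environment or config

# Store an object: the API is key/value over HTTP, not a file-system mount.
s3.put_object(Bucket="example-bucket", Key="logs/2024/09/app.log",
              Body=b"hello object storage")

# Retrieve it by its key.
obj = s3.get_object(Bucket="example-bucket", Key="logs/2024/09/app.log")
print(obj["Body"].read())
```

The flat key namespace (no real directories, just prefixes like "logs/2024/") is what lets object stores scale out to effectively limitless capacity.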


How human-led threat hunting complements automation in detecting cyber threats

IoBs are patterns that suggest malicious intent, even when traditional IoCs aren’t present. These might include unusual access patterns or subtle deviations from normal procedures that automated systems might miss due to the nature of rule-based detection. Human threat hunters excel at recognizing these anomalies through intuition, experience, and context. The combination of automation and human-led threat hunting ensures that all bases are covered. Automation handles the heavy lifting of data processing and detection of known threats, while human intelligence focuses on the subtle, complex, and context-dependent signals that often precede major security incidents. Together, they create a layered defense strategy that is comprehensive and adaptable. ... Skilled threat hunters are essential to a successful cybersecurity team. Their experience and deep understanding of adversarial tactics help to identify and respond to threats that would otherwise go unnoticed. Their intuition and ability to adapt quickly to new information also make them invaluable, especially when dealing with advanced persistent threats. However, the demand for skilled threat hunters far exceeds the supply. 


A critical juncture for public cloud providers

Enterprises are no longer limited to a single provider and can strategically distribute their operations to optimize costs and performance. This multicloud mastery reduces dependency on any specific vendor and emphasizes cloud providers’ need to offer competitive pricing alongside robust service offerings. There is something very wrong with how cloud providers are addressing their primary market. ... As enterprises explore their options, the appeal of on-premises solutions and smaller cloud providers becomes increasingly apparent. These alternatives, which I’ve been calling microclouds, often present customized services and transparent pricing models that align more closely with economic objectives. Indeed, with the surge of interest in AI, enterprises are turning to these smaller providers for GPUs and storage capabilities tailored to the AI systems they want to develop. They are often much less pricy, and many consider them more accessible than the public cloud behemoths roaming the land these days. Of course, Big Cloud quickly points out that it has thousands of services on its platform and is a one-stop shop for most IT needs, including AI. This is undoubtedly the case, and many entrepreneurs leverage public cloud providers for just those reasons. 


Two Letters, Four Principles: Mercedes Way of Responsible AI

In the past, intelligent systems were repeatedly the target of criticism. Such examples included chatbots using offensive language and discriminatory facial recognition algorithms. These cases show that the use of AI requires clear guidelines. "We adhere to stringent data principles, maintain a clear data vision and have a governance board that integrates our IT, engineering and sustainability efforts," said Renata Jungo Brüngger, member of the board of management for integrity, governance and sustainability at Mercedes-Benz Group, during the company’s recent India sustainability dialogue 2024. AI is being applied to optimize supply chains, predict vehicle maintenance needs and personalize customer interactions. Each of these use cases is developed with a strong focus on ethical considerations, ensuring that AI systems operate within a framework of privacy, fairness and transparency. ... "Data governance is the backbone of our AI strategy," said Umang Dharmik, senior vice president at Mercedes-Benz R&D (India). "There are stringent data governance frameworks to ensure responsible data management throughout its life cycle. This not only ensures compliance with global regulations but also fosters trust with our customers and stakeholders."


The Software Development Trends Challenging Security Teams

With the intense pace of development, chasing down each and every vulnerability becomes unfeasible – it is therefore not surprising to see prioritizing remediation top the list of challenges. Security teams can't afford to spend time, money, and effort fixing something that doesn't actually represent real risk to the organization. What's missing is contextual prioritization of the overall development environment in order to select which vulnerabilities to fix first based on the impact to the business. Security teams should aim to shift the focus to overall product security rather than creating silos for cloud security, application security, and other components of the software supply chain. ... Infrastructure as code use is exploding as developers look for ways to move faster. With IaC, developers can provision their own infrastructure without waiting for IT or operations. However, with increased use comes increased chance of misconfigurations. In fact, 67% of survey respondents noted that they are experiencing an increase in IaC template misconfigurations. These misconfigurations are especially dangerous because one flaw can proliferate easily and widely.
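Returning to the prioritization challenge above: one way to picture contextual prioritization is to weight a raw severity score by business context rather than triaging on CVSS alone. The Python sketch below is purely illustrative; the fields and weights are assumptions, not a standard scoring model.

    def contextual_priority(vuln):
        """Rank a finding by likely business impact, not raw severity alone."""
        score = vuln["cvss"]                      # base severity, 0.0-10.0
        score *= 1.5 if vuln["internet_facing"] else 0.8
        score *= vuln["asset_criticality"]        # 1 (low) to 3 (crown jewels)
        score *= 1.4 if vuln["exploit_available"] else 1.0
        return score

    findings = [
        {"id": "CVE-A", "cvss": 9.8, "internet_facing": False,
         "asset_criticality": 1, "exploit_available": False},
        {"id": "CVE-B", "cvss": 6.5, "internet_facing": True,
         "asset_criticality": 3, "exploit_available": True},
    ]
    for f in sorted(findings, key=contextual_priority, reverse=True):
        print(f["id"], round(contextual_priority(f), 1))

Here the "critical" CVSS 9.8 on an isolated, low-value asset ranks below an exploitable medium on an internet-facing crown jewel, which is the kind of reordering contextual prioritization aims for.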


One of the best ways to get value for AI coding tools: generating tests

In our conversations with programmers, a theme that emerged is that many coders see testing as work they HAVE to do, not work they WANT to do. Testing is a best practice that results in a better final outcome, but it isn’t much fun. It’s like taking the time to review your answers after finishing a math test early: crucial for catching mistakes, but not really how you want to spend your free time. For more than a decade, folks have been debating the value of tests on our sites. ... The dislike some developers have for writing tests is a feature, not a bug, for startups working on AI-powered testing tools. CodiumAI is a startup which has made testing the centerpiece of its AI-powered coding tools. “Our vision and our focus is around helping verify code intent,” says Itamar Friedman. He acknowledges that many devs see testing as a chore. “I think many developers do not tend to add tests too much during coding because they hate it, or they actually do it because they think it's important, but they still hate it or find it as a tedious task.” The company offers an IDE extension, Codiumate, that acts as a pair programmer while you work: “We try to automatically raise edge cases or happy paths and challenge you with a failing test and explain why it might actually fail.”
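To make the "edge cases and happy paths" idea concrete, here is the flavor of test suite such a tool might propose for a small function. The function and tests below are invented for illustration and are not actual Codiumate output.

    import pytest

    def parse_price(text: str) -> float:
        """Toy function under test: parse a price string like '$1,234.50'."""
        return float(text.strip().lstrip("$").replace(",", ""))

    def test_happy_path():
        assert parse_price("$19.99") == 19.99

    # Edge cases an assistant might raise automatically:
    def test_thousands_separator():
        assert parse_price("$1,234.50") == 1234.50

    def test_surrounding_whitespace():
        assert parse_price("  $5  ") == 5.0

    def test_empty_input_raises():
        # A "challenging" case: this test forces a decision about whether
        # empty input should raise or silently return 0.0.
        with pytest.raises(ValueError):
            parse_price("")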


Quantum Safe Encryption is Next Frontier in Protecting Sensitive Data

In today’s digital world, cryptographic encryption and authentication are the de rigueur techniques for securing data, communications, access to systems, and digital interactions. Public-key cryptography is a widely prevalent technique used to secure digital infrastructure. The security of the codes and keys used for encryption and authentication in these schemes rests on specific mathematical problems, such as prime factorization, that classical computers cannot solve in a reasonable time. ... As standards for the quantum era have been introduced, PQC-based solutions are arriving on the market. Governments and organizations across the spectrum must move quickly to enhance their cyber resilience for the quantum era. The imperative is not only to prepare for an era of readily available, powerful quantum computers that could attack incumbent systems, but also to devise mechanisms for the imminent possibility that data secured by classical encryption will be decrypted. This includes existing encrypted data as well as data stolen before quantum-safe encryption standards were available and hoarded in anticipation of quantum-computer-assisted tools to crack it.
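As a toy illustration of the underlying math: the sketch below builds an RSA-style key pair from two deliberately tiny primes, then "breaks" it by factoring the modulus. Trial division is trivial at this size and hopeless at real key sizes for classical machines, but Shor’s algorithm on a sufficiently large quantum computer would make the factoring step tractable, which is the whole concern.

    # Toy RSA with tiny primes (real keys use ~2048-bit moduli).
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)   # n = 3233, phi = 3120
    e = 17                              # public exponent, coprime to phi
    d = pow(e, -1, phi)                 # private exponent (Python 3.8+)

    msg = 42
    cipher = pow(msg, e, n)             # encrypt with the public key
    assert pow(cipher, d, n) == msg     # decrypt with the private key

    # "Attack": recover a prime factor of n by brute force.
    p_found = next(i for i in range(2, n) if n % i == 0)
    q_found = n // p_found
    d_broken = pow(e, -1, (p_found - 1) * (q_found - 1))
    assert pow(cipher, d_broken, n) == msg  # secret recovered from n alone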


Want to get ahead? Four activities that can enable a more proactive security regime

As Goerlich notes, CISOs who want a more proactive program need to be looking into the future. To ensure he has time to do that, Goerlich schedules regular off-site meetings every quarter where he and his team ask what is changing. “This establishes a process and a cadence to get [us] out of the day-to-day activities so we can see the bigger picture,” he explains. “We start fresh and look at what’s coming in the next quarter. We ask what we need to be prepared for. We look back and ask what’s working and what’s not. Then we set goals so we can move forward.” Goerlich says he frequently invites outside security pros, such as vendor executives and other thought leaders, to these meetings to hear their insights into evolving threats as well as emerging security tools and techniques to counteract them. He also sometimes invites executive colleagues from within his own organization so that they can share details on their plans and strategies — a move that helps align security with business needs as the organization moves forward. He has seen this effort pay off. He points to actions resulting from one off-site where the team identified challenges around its privileged access management (PAM) process and, more specifically, the number of manual steps it required.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - September 09, 2024

Does your organization need a data fabric?

So, while real-time data integration and performing data transformations are key capabilities of data fabrics, their defining capability is in providing centralized, standardized, and governed access to an enterprise’s data sources. “When evaluating data fabrics, it’s essential to understand that they interconnect with various enterprise data sources, ensuring data is readily and rapidly available while maintaining strict data controls,” says Simon Margolis, associate CTO of AI/ML at SADA. “Unlike other data aggregation solutions, a functional data fabric serves as a ‘one-stop shop’ for data distribution across services, simplifying client access, governance, and expert control processes.” Data fabrics thus combine features of other data governance and dataops platforms. They typically offer data cataloging functions so end-users can find and discover the organization’s data sets. Many will help data governance leaders centralize access control while providing data engineers with tools to improve data quality and create master data repositories. Other differentiating capabilities include data security, data privacy functions, and data modeling features.
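A drastically simplified picture of that "one-stop shop": a single catalog that holds connection metadata and enforces access policy in one place, wherever the data physically lives. Everything below is hypothetical and only illustrates the centralized-governance idea, not any vendor’s API.

    # Minimal sketch of centralized, governed access. All names are invented.
    CATALOG = {
        "sales.orders": {"source": "postgres://erp/orders",
                         "allowed_roles": {"analyst", "finance"}},
        "hr.salaries":  {"source": "s3://hr-bucket/salaries",
                         "allowed_roles": {"hr"}},
    }

    def resolve(dataset: str, role: str) -> str:
        """One governed entry point: look up the dataset, check policy,
        and only then return the physical location."""
        entry = CATALOG[dataset]                # discovery via the catalog
        if role not in entry["allowed_roles"]:  # centralized access control
            raise PermissionError(f"role {role!r} may not read {dataset}")
        return entry["source"]

    print(resolve("sales.orders", "analyst"))   # postgres://erp/orders
    # resolve("hr.salaries", "analyst") raises PermissionError

The point is that consumers never handle per-source credentials or locations directly; governance lives in one layer.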


The Crucial Role of Manual Data Annotation and Labeling in Building Accurate AI Systems

Automatic annotation systems frequently suffer from severe limitations, most notably in accuracy. Despite its rapid evolution, AI can still misunderstand context, fail to spot complex patterns, and perpetuate inherent biases in data. For example, an automated annotation system may mislabel an image of a person holding an object because it is unable to handle complicated scenarios or overlapping objects. Similarly, in textual data, automated systems may misread cultural references, idiomatic expressions, or sentiments. ... Manual annotation, on the other hand, uses human expertise to label data, ensuring accuracy, context understanding, and bias reduction. Humans are naturally skilled at resolving ambiguity, understanding context, and making sense of complex patterns that machines may not be able to grasp. This knowledge is critical in applications requiring absolute precision, such as healthcare diagnostics, legal document interpretation, and ethical AI deployment. Manual annotation adds a level of fairness that automated procedures typically lack. Human annotators can recognize and mitigate biases in datasets, whether they be racial, gender-based, or cultural.


AI orchestration: Crafting harmony or creating dependency?

In a collaborative relationship, both parties have an equal and complementary role. AI excels at processing enormous amounts of data, pattern recognition and certain types of analysis, while people excel at creativity, emotional intelligence and complex decision-making. In this relationship, the human keeps agency by critically evaluating AI outputs and making final decisions. However, this relationship can easily veer into dependency, where we become unable or unwilling to perform tasks without AI help, even tasks we could previously do independently. As AI outputs have become amazingly human-like and convincing, it is easy to accept them without critical evaluation or understanding, even when we know the content may be a hallucination — an AI-generated output that appears convincing but is false or misleading. ... As AI continues to advance and become more indistinguishable from human interaction, the distinction between collaboration and dependency becomes increasingly blurred. Or worse, as leading historian Yuval Noah Harari, renowned for his works on the history and future of humankind, points out: intimacy is a powerful weapon, one that can then be used to persuade us.


The deflating AI bubble is inevitable — and healthy

Predicting the future is generally a fool’s errand, as Nobel Prize-winning physicist Niels Bohr recognized when he stated, “Prediction is very difficult, especially about the future.” This was particularly true in the early 1990s as the Web started to take off. Even internet pioneer and Ethernet standard co-inventor Robert Metcalfe was doubtful of the internet’s viability when, in 1995, he predicted it had a 12-month future. Two years later, he literally ate his words at the 1997 WWW Conference, blending a printed copy of his prediction with water and drinking it. But there comes a point in a new technology when its potential benefits become clear even if the exact shape of its evolution is opaque. ... Many AI deployments and integrations are not revolutionary, however, but add incremental improvements and value to existing products and services. Graphics and presentation software provider Canva, for example, has integrated Google’s Vertex AI to streamline its video editing offering. Canva users can skip a number of tedious editing steps and create videos in seconds rather than minutes or hours. And WPP, the global marketing services giant, has integrated Anthropic’s Claude AI service into its internal marketing system, WPP Open.


Blockchain And Quantum Computing Are On A Collision Course

Herman warns, “The real danger regarding the future of blockchain is that it’s used to build critical digital infrastructures before this serious security vulnerability has been fully investigated. Imagine a major insurance company putting at great expense all its customers into a blockchain-based network, and then three years later having to rip it all out to install a quantum-secure network, in its place.” Despite the bleak outlook, Herman offers a solution that lies within the very technology posing the threat. Quantum cryptography, particularly quantum random-number generators and quantum-resistant algorithms, could provide the necessary safeguards to protect blockchain networks from quantum attacks. “Quantum random-number generators are already being implemented today by banks, governments, and private cloud carriers. Adding quantum keys to blockchain software, and to all encrypted data, will provide unhackable security against both a classical computer and a quantum computer,” he notes. Moreover, the U.S. National Institute of Standards and Technology (NIST) has stepped in to address the issue by releasing standards for post-quantum cryptography. 


Low-Code Solutions Gain Traction In Banking And Insurance Digital Transformation

“Digital transformation should be focused on quick wins so that organizations can start seeing the ROI much sooner,” he said, noting that digital transformation is not just about adopting new technologies — it’s about fundamentally rethinking how businesses operate and deliver value to their customers. One of the recurring challenges he identified is the issue of onboarding in the banking sector. Despite variations in onboarding times from one bank to another, internal inefficiencies often cause delays. A portion of these delays stems from internal traffic rather than external factors. To address this, Arun MS advocated for a shift toward self-service portals, where customers can take control of processes like document submission. “Engaging customers as stakeholders in the process reduces internal bottlenecks and speeds up the overall timeline for onboarding,” he said. This approach not only enhances operational efficiency but also improves the customer experience, which is essential in an increasingly digital world. However, Arun MS was quick to caution that transferring processes to customers must be done thoughtfully.


Why We Need AI Professional Practice

AI’s capacity to learn, interpret, and abstract at scale alters how we navigate complex, manifestly unpredictable situations and solutions, and brings an ecosystem-scale vista of possibilities, challenges, and dependencies into view. It forces us to examine every aspect of the human condition and our increasing dependence on the tools we fashion. This is the pillar of “practice,” which will emerge from the need to harness both the immediate and indirect value advanced AI can bring. It is about direct interpretation, implementation, control, and effect, rather than indirect consideration, control, and effect. It is, in metaphorical terms, about the rubber hitting the road. ... As we look at how AI will continue to shape the business landscape, we can see an element that hasn’t received much attention yet: how do we ensure that the right skills, best practices, and standards are developed and shared amongst those managing this AI revolution, and, most importantly, how do we uphold the standard of that professional practice? Some voices liken the onset of AI to the invention of the Internet, which is reflected in the skills now required from staff, with new data showing that 66% of business leaders wouldn’t hire someone without AI skills.


AI cybersecurity needs to be as multi-layered as the system it’s protecting

By altering the technical design and development of AI before its training and deployment, companies can reduce their security vulnerabilities before they begin. For example, even selecting the correct model architecture has considerable implications, with each AI model exhibiting particular affinities for mitigating specific types of prompt injection or jailbreaks. Identifying the correct AI model for a given use case is important to its success, and this is equally true of security. Developing an AI system with embedded cybersecurity begins with how training data is prepared and processed. Training data must be sanitized, and a filter to limit ingested training data is essential. Input restoration obscures an adversary’s view of an AI model’s input-output relationship by adding an extra layer of randomness. Companies should create constraints to reduce potential distortions of the learning model through Reject-On-Negative-Impact training. Thereafter, security testing and vulnerability scanning of the AI model should be performed continuously. During deployment, developers should validate modifications and detect potential tampering through cryptographic checks.
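For that last deployment step, a minimal form of tamper detection is verifying a cryptographic digest of the model artifact before loading it. The sketch below uses SHA-256; the file name and expected digest are placeholders, and in practice a signature from a key held by the training pipeline is stronger than a bare hash.

    import hashlib

    def sha256_of(path: str) -> str:
        """Stream the file so large model artifacts need not fit in RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Digest recorded at training time, distributed out-of-band (placeholder).
    EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

    def load_model_safely(path: str):
        if sha256_of(path) != EXPECTED:
            raise RuntimeError(f"{path} failed integrity check; refusing to load")
        # ...hand off to the actual model loader only after verification...
        print(f"{path} verified, loading")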


Kipu Quantum Team Says New Quantum Algorithm Outshines Existing Techniques

A Kipu Quantum-led team of researchers announced the successful testing of what they’re labeling the largest quantum optimization problem run on a digital quantum computer. They suggest this marks the start of the commercial quantum advantage era. ... Combinatorial optimization is critical in many industries, from logistics and scheduling to computational chemistry and biology. These problems, which involve finding the best or near-optimal solutions in large discrete configuration spaces, are known to be computationally challenging, particularly for classical computing. This complexity has driven the exploration of quantum optimization techniques as an alternative. ... While Kipu Quantum’s BF-DCQO algorithm shows promise, the results are based on simulations and experiments using specific quantum architectures. The 156-qubit experimental validation was performed on IBM’s heavy-hex processor, while the 433-qubit simulation has yet to be fully realized on physical hardware. Challenges remain in scaling the method to more complex real-world HUBO problems that require larger quantum systems.


Inside the Mind of a Hacker: How Scams Are Carried Out

Hacking is, first and foremost, a mindset. It’s a likely avenue to pursue when you're endowed with an organized mind, a passion for IT, and a boundless curiosity about taking things apart and understanding their inner workings. Since highly publicized cases usually involve the theft of exorbitant sums, it’s logical for the public to assume that monetary gain is the top motivator. While it’s high on the list, studies that explore hacker motivation consistently rank the thrill of circumventing cyber defenses and the accompanying display of one’s mastery as chief driving forces. Hacking is both technical and creative. Successful hacks happen due to a combination of high technical prowess, the ability to grasp and implement novel solutions, and a general disregard for the consequences of those actions. ... The last step involves capitalizing on a hacker’s ill-gotten gains. Those who have managed to convince someone to transfer funds use mule accounts and money laundering schemes to eventually get a hold of them. Hackers who get their hands on a company’s industrial secrets may try to sell them to the competition. Data obtained through breaches finds its way to the dark web, where other hackers may purchase it in bulk.



Quote for the day:

"Listen with curiosity speak with honesty, act with integrity." -- Roy T. Benett