Daily Tech Digest - January 17, 2025

The Architect’s Guide to Understanding Agentic AI

All business processes can be broken down into two planes: a control plane and a tools plane. The tools plane is a collection of APIs, stored procedures and external web calls to business partners. However, for organizations that have started their AI journey, it could also include calls to traditional machine learning models (wave No. 1) and LLMs (wave No. 2) operating in “one-shot” mode. ... The promise of agentic AI is to use LLMs with full knowledge of an organization’s tools plane and allow them to build and execute the logic needed for the control plane. This can be done by providing a “few-shot” prompt to an LLM that has been fine-tuned on an organization’s tools plane; such a prompt can answer the same hypothetical question presented earlier. This is also known as letting the LLM think slowly. ... If agentic AI still seems to be made up of too much magic, consider that every developer who writes code daily probably asks an LLM questions of exactly this kind. ... Agentic AI is the next logical evolution of AI. It is based on capabilities with a solid footing in AI’s first and second waves. The promise is to use AI to solve more complex problems by allowing agents to plan, execute tasks and revise; in other words, by letting them think slowly. This also promises to produce more accurate responses.
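
To make the two-plane split concrete, here is a minimal, hypothetical sketch of an agentic control loop in Python. The model plans which tools-plane call to make next, executes it, observes the result, and revises until it can answer; the call_llm stand-in, the tool registry, and the JSON action format are illustrative assumptions, not any vendor’s API.

```python
import json

TOOLS = {  # the "tools plane": callables the agent is allowed to invoke
    "get_order_status": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

# Hypothetical stand-in for a fine-tuned LLM: returns scripted plans here;
# a real system would call a model endpoint instead.
SCRIPTED_REPLIES = iter([
    '{"action": "get_order_status", "args": {"order_id": "A17"}}',
    '{"action": "final_answer", "answer": "Order A17 has shipped."}',
])

def call_llm(prompt: str) -> str:
    return next(SCRIPTED_REPLIES)

def run_agent(question: str, max_steps: int = 5) -> str:
    """The 'control plane': let the model plan, act, observe, and revise."""
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        reply = json.loads(call_llm("\n".join(history)))   # plan the next step
        if reply["action"] == "final_answer":
            return reply["answer"]
        result = TOOLS[reply["action"]](**reply["args"])   # execute a tool call
        history.append(f"Observation: {json.dumps(result)}")  # revise context
    return "Gave up after max_steps"

print(run_agent("Where is order A17?"))  # -> "Order A17 has shipped."
```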


AI datacenters putting zero emissions promises out of reach

Datacenters’ use of water and land is another bone of contention, which, in combination with their reliance on tax breaks and the limited number of local jobs they deliver, will see them face growing opposition from local residents and environmental groups. Uptime highlights that many governments have set targets for GHG emissions to become net-zero by a set date, but warns that because the AI boom looks set to test power availability, it will almost certainly put these pledges out of reach. ... Many governments seem convinced of the economic benefits promised by AI at the expense of other concerns, the report notes. The UK is a prime example, this week publishing the AI Opportunities Action Plan and vowing to relax planning rules to prioritize datacenter builds. ... Increasing rack power presents several challenges, the report warns, including the sheer space taken up by power distribution infrastructure such as switchboards, UPS systems, distribution boards, and batteries. Without changes to the power architecture, many datacenters risk becoming an electrical plant built around a relatively small IT room. Solving this will call for changes such as medium-voltage (over 1 kV) distribution to the IT space and novel power distribution topologies. However, this overhaul will take time to unfold, with 2025 potentially a pivotal year for the investment needed to make this possible.


State of passkeys 2025: passkeys move to mainstream

One of the critical factors driving passkeys into the mainstream is the full passkey-readiness of devices, operating systems and browsers. Apple (iOS, macOS, Safari), Google (Android, Chrome) and Microsoft (Windows, Edge) have fully integrated passkey support across their platforms: over 95 percent of all iOS and Android devices are passkey-ready, and over 90 percent have passkey functionality enabled. With Windows soon supporting synced passkeys, all major operating systems ensure users can securely and effortlessly access their credentials across devices. ... With full device support, a polished UX, growing user familiarity, and a proven track record among early adopter implementations, there’s no reason for businesses to delay adopting passkeys. The business advantages of passkeys are compelling. Companies that previously relied on SMS-based authentication can save considerably on SMS costs. Beyond that, enterprises adopting passkeys benefit from reduced support overhead (since fewer password resets are needed), lower risk of breaches (thanks to phishing resistance), and optimized user flows that improve conversion rates. Collectively, these perks make a convincing business case for passkeys.


Balancing usability and security in the fight against identity-based attacks

AI and ML are a double-edged sword in cybersecurity. On one hand, cybercriminals are using these technologies to make their attacks faster and smarter. They can create highly convincing phishing emails, generate deepfake content, and even find ways to bypass traditional security measures. For example, generative AI can craft emails or videos that look almost real, tricking people into falling for scams. On the flip side, AI and ML are also helping defenders. These technologies allow security systems to quickly analyze vast amounts of data, spotting unusual behavior that might indicate compromised credentials. ... Targeted security training can be useful, but generally you want to reduce the human dependency as much as possible. This is why it is critical to have controls that meet users where they are. If you can deliver point-in-time guidance, or outright technically prevent something like a user entering their password into a phishing site, you significantly reduce the dependency on the human to make the right decision unassisted every time. When you consider how hard it can be for even security professionals to spot the more sophisticated phishing sites, it’s essential that we help people out as much as possible with technical controls.


Understanding Leaderless Replication for Distributed Data

Leaderless replication is another fundamental replication approach for distributed systems. It alleviates problems of multi-leader replication while, at the same time, introducing its own. Write conflicts in multi-leader replication are tackled in leaderless replication with quorum-based writes and systematic conflict resolution. Cascading failures, synchronization overhead, and operational complexity can be handled in leaderless replication via its decentralized architecture. Removing leaders can simplify cluster management, failure handling, and recovery mechanisms. Any replica can handle writes/reads. ... Direct writes and coordination-based replication are the most common approaches in leaderless replication. In the first approach, clients write directly to node replicas, while in the second, writes are coordinator-mediated. It is worth mentioning that, unlike the leader-follower concept, coordinators in leaderless replication do not enforce a particular ordering of writes. ... Failure handling is one of the most challenging aspects of both approaches. While direct writes provide better theoretical availability, they can be problematic during failure scenarios. Coordinator-based systems can provide clearer failure semantics, but at the cost of potential coordinator bottlenecks.
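
A minimal Python sketch of the quorum rule behind leaderless writes: with N replicas, a write counts as durable once W replicas acknowledge it, and a read consults R replicas; choosing W + R > N guarantees the read set overlaps the latest write. Integer version counters stand in for the vector clocks or last-write-wins timestamps real Dynamo-style stores use, and failure handling is reduced to a comment.

```python
# Toy quorum replication: N replicas, write quorum W, read quorum R.
class Replica:
    def __init__(self):
        self.store = {}  # key -> (version, value)

N, W, R = 3, 2, 2
replicas = [Replica() for _ in range(N)]

def quorum_write(key, value, version):
    acks = 0
    for r in replicas:  # send to all replicas; count acknowledgements
        r.store[key] = (version, value)
        acks += 1       # a real system tolerates some replicas failing here
    return acks >= W    # durable once a write quorum has acknowledged

def quorum_read(key):
    # Consult R replicas and keep the highest-versioned value seen;
    # W + R > N ensures at least one of them saw the latest write.
    responses = [r.store.get(key, (0, None)) for r in replicas[:R]]
    return max(responses, key=lambda vv: vv[0])

quorum_write("user:42", "alice@example.com", version=1)
print(quorum_read("user:42"))  # (1, 'alice@example.com')
```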


Blockchain in Banking: Use Cases and Examples

Bitcoin has entered a space usually reserved for gold and sovereign bonds: national reserves. While the U.S. Federal Reserve maintains that it cannot hold Bitcoin under current regulations, other financial systems are paying close attention to its potential role as a store of value. On the global stage, Bitcoin is being viewed not just as a speculative asset but as a hedge against inflation and currency volatility. Governments are now debating whether digital assets can sit alongside gold bars in their vaults. Behind all this activity lies blockchain, providing transparency, security, and a framework for something as ambitious as a digital reserve currency. ... Financial assets like real estate, investment funds, or fine art are traditionally expensive, hard to divide, and slow to transfer. Blockchain changes this by converting these assets into digital tokens, enabling fractional ownership and simplifying transactions. UBS launched its first tokenized fund on the Ethereum blockchain, allowing investors to trade fund shares as digital assets. This approach reduces administrative costs, accelerates settlements, and improves accessibility for investors. Additionally, one of Central and Eastern Europe’s largest banks has tokenized fine art on the Aleph Zero blockchain. This enables fractional ownership of valuable art pieces while maintaining verifiable proof of ownership and authenticity.


Decentralized AI in Edge Computing: Expanding Possibilities

Federated learning enables decentralized training of AI models directly across multiple edge devices. This approach eliminates the need to transfer raw data to a central server, preserving privacy and reducing bandwidth consumption. Models are trained locally, with only aggregated updates shared to improve the global system. ... Localized data processing empowers edge devices to conduct real-time analytics, facilitating faster decision-making and minimizing reliance on central frameworks. This capability is fundamental for applications such as autonomous vehicles and industrial automation, where even milliseconds can be vital. ... Blockchain technology is pivotal in decentralized AI for edge computing by providing a secure, immutable ledger for data sharing and task execution across edge nodes. It ensures transparency and trust in resource allocation, model updates, and data verification processes. ... By processing data directly at the edge, decentralized AI removes the delays in sending data to and from centralized servers. This capability ensures faster response times, enabling near-instantaneous decision-making in critical real-time applications. ... Decentralized AI improves privacy protocols by empowering the processing of sensitive information locally on the device rather than sending it to external servers.
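
A hedged NumPy sketch of the federated averaging step described above: each device computes an update on its private data, and only the size-weighted average of those updates reaches the server, so raw data never leaves the device. This illustrates the aggregation idea only, not any particular framework’s API.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a device's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_w, devices):
    """Aggregate per-device updates, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in devices:                 # each (X, y) stays on its device
        updates.append(local_update(global_w.copy(), X, y))
        sizes.append(len(y))
    # Only these aggregated updates are shared to improve the global model.
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, devices)
```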


The Myth of Machine Learning Reproducibility and Randomness

The nature of ML systems contributes to the challenge of reproducibility. ML components implement statistical models that provide predictions about some input, such as whether an image is a tank or a car. But it is difficult to provide guarantees about these predictions. As a result, guarantees about the resulting probabilistic distributions are often given only in limits, that is, as distributions across a growing sample. These outputs can also be described by calibration scores and statistical coverage, such as: “We expect the true value of the parameter to be in the range [0.81, 0.85] 95 percent of the time.” ... There are two basic techniques we can use to manage reproducibility. First, we control the seeds for every randomizer used; in practice there may be many. Second, we need a way to tell the system to serialize the training process executed across concurrent and distributed resources. Both approaches require the platform provider to include this sort of support. ... Despite the importance of these exact reproducibility modes, they should not be enabled in production. Engineering and testing should use these configurations for setup, debugging and reference tests, but not during final development or operational testing.
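
A hedged sketch of the first technique, seed control, in Python. In practice “every randomizer” spans more than this: data shuffling, weight initialization, dropout, GPU kernels, and worker processes each have their own, and frameworks such as PyTorch expose their own seeding and deterministic-mode switches, which is exactly the platform support the text mentions.

```python
import os
import random

import numpy as np

def seed_everything(seed: int = 42) -> None:
    """Pin the seeds of every randomizer we control; there may be more."""
    os.environ["PYTHONHASHSEED"] = str(seed)  # Python hash randomization
    random.seed(seed)                         # stdlib RNG
    np.random.seed(seed)                      # NumPy's global RNG
    # A framework like PyTorch needs its own calls on top of these, e.g.
    # torch.manual_seed(seed) and torch.use_deterministic_algorithms(True).

seed_everything(42)
```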


The High-Stakes Disconnect For ICS/OT Security

ICS technologies, crucial to modern infrastructure, are increasingly targeted in sophisticated cyber-attacks. These attacks, often aimed at causing irreversible physical damage to critical engineering assets, highlight the risks of interconnected and digitized systems. Recent incidents like TRISIS, CRASHOVERRIDE, Pipedream, and Fuxnet demonstrate the evolution of cyber threats from mere nuisances to potentially catastrophic events, orchestrated by state-sponsored groups and cybercriminals. These actors target not just financial gains but also disruptive outcomes and acts of warfare, blending cyber and physical attacks. Additionally, human-operated ransomware and targeted ICS/OT ransomware are a growing concern, having been on the rise in recent times. ... Traditional IT security measures, when applied to ICS/OT environments, can provide a false sense of security and disrupt engineering operations and safety. Thus, it is important to consider and prioritize the SANS Five ICS Cybersecurity Critical Controls. This freely available whitepaper sets forth the five most relevant critical controls for an ICS/OT cybersecurity strategy that can flex to an organization's risk model and provides guidance for implementing them.


Execs are prioritizing skills over degrees — and hiring freelancers to fill gaps

Companies are adopting more advanced approaches to assessing potential and current employee skills, blending AI tools with hands-on evaluations, according to Monahan. AI-powered platforms are being used to match candidates with roles based on their skills, certifications, and experience. “Our platform has done this for years, and our new UMA (Upwork’s Mindful AI) enhances this process,” she said. Gartner, however, warned that “rapid skills evolutions can threaten quality of hire, as recruiters struggle to ensure their assessment processes are keeping pace with changing skills. Meanwhile, skills shortages place more weight on new hires being the right hires, as finding replacement talent becomes increasingly challenging. Robust appraisal of candidate skills is therefore imperative, but too many assessments can lead to candidate fatigue.” ... The shift toward skills-based hiring is further driven by a readiness gap in today’s workforce. Upwork’s research found that only 25% of employees feel prepared to work effectively alongside AI, and even fewer (19%) can proactively leverage AI to solve problems. “As companies navigate these challenges, they’re focusing on hiring based on practical, demonstrated capabilities, ensuring their workforce is agile and equipped to meet the demands of a rapidly evolving business landscape,” Monahan said.



Quote for the day:

“If you set your goals ridiculously high and it’s a failure, you will fail above everyone else’s success.” -- James Cameron

Daily Tech Digest - January 16, 2025

How DPUs Make Collaboration Between AppDev and NetOps Essential

While GPUs have gotten much of the limelight due to AI, DPUs in the cloud are having an equally profound impact on how applications are delivered and network functions are designed. The rise of DPU-as-a-Service is breaking down traditional silos between AppDev and NetOps teams, making collaboration essential to fully unlock DPU capabilities. DPUs offload network, security, and data processing tasks, transforming how applications interact with network infrastructure. AppDev teams must now design applications with these offloading capabilities in mind, identifying which tasks can benefit most from DPUs—such as real-time data encryption or intensive packet processing. ... AppDev teams must explicitly design applications to leverage DPU-accelerated encryption, while NetOps teams need to configure DPUs to handle these workloads efficiently. This intersection of concerns creates a natural collaboration point. The benefits of this collaboration extend beyond security. DPUs excel at packet processing, data compression, and storage operations. When AppDev and NetOps teams work together, they can identify opportunities to offload compute-intensive tasks to DPUs, dramatically improving application performance. 


The CFO may be the CISO’s most important business ally

“Cybersecurity is an existential threat to every company. Gone are the days where CFOs could only be fired if they ran out of money, cooked the books, or had a major controls outage,” he said. “Lack of adequate resourcing of cybersecurity is an emerging threat to their very existence.” This sentiment reflects the reality that for most organizations cyber threat is the No. 1 business risk today, and this has significant implications for the strategic survival of the enterprise. It’s time for CISOs and CFOs to address the natural barriers to their relationship and develop a strategic partnership for the good of the company. ... CISOs should be aware of a few key strategies for improving collaboration with their CFO counterparts. The first is reverse mentoring. Because CFOs and CISOs come from differing perspectives and lead domains rife with terminology and details that can be quite foreign to the other, reverse mentoring can be important for building a bridge between the two. In such a relationship, the CISO can offer insights into cybersecurity, while simultaneously learning to communicate in the CFO’s financial language. This mutual learning creates a more aligned approach to organizational risk. Second, CISOs must also develop their commercial perspective.


Establishing a Software-Based, High-Availability Failover Strategy for Disaster Mitigation and Recovery

No one should be surprised that cloud services occasionally go offline. If you think of the cloud as “someone else’s computer,” then you recognize there are servers and software behind it all. Someone else is doing their best to keep the lights on in the face of events like human error, natural disasters, and DDoS and other types of cyberattacks. Someone else is executing their disaster response and recovery plan. While the cloud may well be someone else’s computer, when there is a cloud outage that affects your operations, it is your problem. You are at the mercy of someone else to restore services so you can get back online. It doesn’t have to be that way. Cloud-dependent organizations can adopt strategies that allow them to minimize the risk that someone else’s outage will knock them offline. One such strategy is to take advantage of hybrid or multi-cloud architecture to achieve operational resiliency and high availability, using the service redundancy that SANless clustering provides. Normally a storage area network (SAN) provides the shared storage used to configure clustered nodes on-premises, in the cloud, and at a disaster recovery site. It’s a proven approach, but because it is hardware dependent, it is costly in terms of dollars and computing resources, and comes with additional management demands.
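
To make the software-based failover idea concrete, below is a toy sketch of the heartbeat logic at the core of an HA cluster: a standby node promotes itself after the active node misses enough consecutive heartbeats. Real SANless clusters pair this with block-level replication of local storage and split-brain protection (for example, a quorum witness), both omitted here.

```python
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between expected heartbeats
MISSED_LIMIT = 3           # consecutive misses tolerated before failover

def monitor(active_is_alive, promote_standby):
    """Standby-side loop: promote when the active node goes quiet."""
    missed = 0
    while True:
        if active_is_alive():      # e.g., a ping over a private network
            missed = 0
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                promote_standby()  # mount replicated storage, take the VIP
                return
        time.sleep(HEARTBEAT_INTERVAL)

# Example wiring (hypothetical callables):
# monitor(lambda: ping("10.0.0.1"), promote_to_active)
```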


Trusted Apps Sneak a Bug Into the UEFI Boot Process

UEFI is a kind of sacred space — a bridge between firmware and operating system, allowing a machine to boot up in the first place. Any malware that invades this space will earn a dogged persistence through reboots by reserving its own spot in the startup process. Security programs have a harder time detecting malware at such a low level of the system. Even more importantly, by loading first, UEFI malware simply has a head start over the security checks it aims to avoid. Malware authors take advantage of this order of operations by designing UEFI bootkits that can hook into security protocols and undermine critical security mechanisms like UEFI Secure Boot or HVCI, Windows' technology for blocking unsigned code in the kernel. To ensure that none of this can happen, the UEFI Boot Manager verifies every boot application binary against two lists: "db," which includes all signed and trusted programs, and "dbx," which includes all forbidden programs. But when a vulnerable binary is signed by Microsoft, the matter is moot. Microsoft maintains a list of requirements for signing UEFI binaries, but the process is a bit obscure, Smolár says. "I don't know if it involves only running through this list of requirements, or if there are some other activities involved, like manual binary reviews where they look for not necessarily malicious, but insecure behavior," he says.
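
In rough pseudo-Python, the allow/deny decision described above looks like the following; the deny list (dbx) is consulted first, so a revoked binary loses even if it carries a trusted signature. This is a conceptual sketch of the Secure Boot check, not actual firmware logic.

```python
def secure_boot_allows(binary_hash: str, signer: str,
                       db: set, dbx: set) -> bool:
    """Conceptual Secure Boot decision: the deny list wins over the allow list."""
    if binary_hash in dbx or signer in dbx:
        return False          # explicitly forbidden (revoked)
    return binary_hash in db or signer in db  # must be signed and trusted

# The flaw described above: a vulnerable binary validly signed by a trusted
# signer passes this check until its hash is added to dbx.
```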


How CISOs Can Build a Disaster Recovery Skillset

In a world of third-party risk, human error, and motivated threat actors, even the best prepared CISOs cannot always shield their enterprises from all cybersecurity incidents. When disaster strikes, how can they put their skills to work? “It is an opportunity for the CISO to step in and lead,” says Erwin. “That's the most critical thing a CISO is going to do in those incidents, and if the CISO isn't capable of doing that or doesn't show up and shape the response, well, that's an indication of a problem.” CISOs, naturally, want to guide their enterprises through a cybersecurity incident. But disaster recovery skills also apply to their own careers. “I don't see a world where CISOs don't get some blame when an incident happens,” says Young. There is plenty of concern over personal liability in this role. CISOs must consider the possibility of being replaced in the wake of an incident and potentially being held personally responsible. “Do you have parachute packages like CEOs do in their corporate agreements for employability when they're hired?” Young asks. “I also see this big push of not only … CISOs on the D&O insurance, but they're also starting to acquire private liability insurance for themselves directly.”


Site Reliability Engineering Teams Face Rising Challenges

While AI adoption continues to grow, it hasn't reduced operational burdens as expected. Performance issues are now considered as critical as complete outages. Organizations are also grappling with balancing release velocity against reliability requirements. ... Daoudi suspects that a series of contributing factors has led to the unexpected rise in toil levels. The first is AI systems maintenance: AI systems themselves require significant maintenance, including updating models and managing GPU clusters. AI systems also often need manual supervision due to subtle and hard-to-predict errors, which can increase the operational load. Additionally, the free time created by expediting valuable activities through AI may end up being filled with toilsome tasks, he said. "This trend could impact the future of SRE practices by necessitating a more nuanced approach to AI integration, focusing on balancing automation with the need for human oversight and continuous improvement," Daoudi said. Beyond AI, Daoudi also suspects that organizations are incorrectly evaluating toolchain investments. In his view, despite all the investments in inward-focused application performance management (APM) tools, there are still too many incidents, and the report shows sentiment that observability instrumentation is insufficient.


The Hidden Cost of Open Source Waste

Open source inefficiencies impact organizations in ways that go well beyond technical concerns. First, they drain productivity. Developers spend as much as 35% of their time untangling dependency issues or managing vulnerabilities — time that could be far better spent building new products, paying down technical debt, or introducing automation to drive cost efficiencies. ... Outdated dependencies compound the challenge. According to the report, 80% of application dependencies remain un-upgraded for over a year. While not all of these components introduce critical vulnerabilities, failing to address them increases the risk of undetected security gaps and adds unnecessary complexity to the software supply chain. This lack of timely updates leaves development teams with mounting technical debt and a higher likelihood of encountering issues that could have been avoided. The rapid pace of software evolution adds another layer of difficulty. Dependencies can become outdated in weeks, creating a moving target that’s hard to manage without automation and actionable insights. Teams often play catch-up, deepening inefficiencies and increasing the time spent on reactive maintenance. Automation helps bridge this gap by scanning for risks and prioritizing high-impact fixes, ensuring teams focus on the areas that matter most.
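
One concrete form of the automation described above is simply asking pip which installed packages lag their latest releases. The `pip list --outdated --format=json` interface is real; the triage step joining results to vulnerability data is left as a hypothetical comment.

```python
import json
import subprocess

def outdated_dependencies():
    """Return installed packages that lag behind their latest release."""
    out = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

for pkg in outdated_dependencies():
    # A real pipeline would join this with vulnerability data (e.g., a CVE
    # feed) to prioritize high-impact fixes first, as the article suggests.
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```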


The Virtualization Era: Opportunities, Challenges, and the Role of Hypervisors

Choosing the most appropriate hypervisor requires thoughtful consideration of an organization’s immediate needs and long-term goals. Scalability is a crucial factor, as the selected solution must address current workloads and seamlessly adapt to future demands. A hypervisor that integrates smoothly with an organization’s existing IT infrastructure reduces the risks of operational disruptions and ensures a cost-effective transition. Equally important is the financial aspect, where businesses must look beyond the initial licensing fees to account for potential hidden costs, such as staff training, ongoing support, and any necessary adjustments to workflows. The quality of support the vendor provides, coupled with the strength of the user community, can significantly influence the overall experience, offering critical assistance during implementation and beyond. For many businesses, partnering with Managed Service Providers (MSPs) brings an added layer of expertise, ensuring that the chosen solution delivers maximum value while minimizing risk. The ongoing evolution and transformation of the virtualization market presents both challenges and opportunities. As the foundation for IT efficiency and flexibility, hypervisors remain central to these changes.


DORA’s Deadline Looms: Navigating the EU’s Mandate for Threat Led Penetration Testing

It’s hard to defend yourself if you have no idea what you’re up against, and history and countless news stories are evidence that trying to defend against every manner of digital threat is a fool’s errand. As such, the first step to approaching DORA compliance is profiling not only the threat actors that target the financial services sector, but specifically which actors, and by what Tactics, Techniques and Procedures (TTPs), you are likely to be attacked. However, before you can determine how an actor may view and approach you, you need to know who you are. So, the first profile that must be built is of your own business: not just financial services, but what sector/aspect, what region, and finally what the specific risk profile is based on the critical assets in organizational, and even partner, infrastructures. The second profile begins with the current population of known actors that target the financial services industry. It then narrows to the actors known to be aligned with the specific targeting profile. From there, leveraging industry-standard models such as the MITRE ATT&CK framework, a graph is created of each actor/group’s understood goals and TTPs, including their traditional and preferred methods of access and exploitation, as well as their capabilities for evasion, persistence and command and control.
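
A sketch of what the two profiles can reduce to in practice: a mapping from threat groups to MITRE ATT&CK technique IDs, intersected with your own exposure profile to scope the TLPT engagement. The technique IDs below are real ATT&CK identifiers, but the group names and their attributed TTP sets are purely illustrative, not authoritative attributions.

```python
# Illustrative actor -> ATT&CK TTP mapping (technique IDs are real;
# the attribution to these made-up group names is hypothetical).
ACTOR_TTPS = {
    "FIN-example-1": {"T1566", "T1078", "T1486"},  # phishing, valid accounts,
                                                   # data encrypted for impact
    "APT-example-2": {"T1190", "T1055", "T1071"},  # public-facing app exploit,
                                                   # process injection, C2
}

# Your own profile: techniques your attack surface plausibly exposes.
OUR_EXPOSURE = {"T1566", "T1190", "T1078"}

def scoped_ttps(actors=ACTOR_TTPS, exposure=OUR_EXPOSURE):
    """TTPs worth emulating: what likely attackers do AND what we expose."""
    return {actor: ttps & exposure for actor, ttps in actors.items()}

print(scoped_ttps())  # drives the scenarios for threat-led penetration testing
```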


With AGI looming, CIOs stay the course on AI partnerships

“The immediate path for CIOs is to leverage gen AI for augmentation rather than replacement — creating tools that help human teams make smarter, faster decisions,” Nardecchia says. “There are very promising results with causal AI and AI agents that give an autonomous-like capability and most solutions still have a human in the loop.” Matthew Gunkel, CIO of IT Solutions at the University of California at Riverside, agrees that IT organizations should keep moving forward regardless of the growing delta between AI technology milestones and actual AI implementations. ... “The rapid advancements in AI technology, including projections for AGI and ACI, present a paradox: While the technology races ahead, enterprise adoption remains in its infancy. This divergence creates both challenges and opportunities for CIOs, employees, and AI vendors,” Priest says. “Rather than speculating on when AGI/ACI will materialize, CIOs would be best served to focus on what preparation is required to be ready for it and to maximize the value from it.” Sid Nag, vice president at Gartner, agrees that CIOs should train their attention on laying the foundation for AI and addressing important matters such as privacy, ethics, legal issues, and copyright issues, rather than focus on AGI advances.



Quote for the day:

"When you practice leadership,The evidence of quality of your leadership, is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - January 15, 2025

Passkeys: they're not perfect but they're getting better

Users are largely unsure about the implications for their passkeys if they lose or break their device, as it seems their device holds the entire capability to authenticate. To trust passkeys as a replacement for the password, users need to be prepared and know what to do in the event of losing one – or all – of their devices. ... Passkeys are ‘long life’ because users can’t forget them or create one that is weak, so if they’re done well there should be no need to reset or update them. As a result, there’s an increased likelihood that at some point a user will want to move their passkeys to the Credential Manager of a different vendor or platform. This is currently challenging to do, but FIDO and vendors are actively working to address this issue and we wait to see support for this take hold across the market. ... For passkey-protected accounts, potential attackers are now more likely to focus on finding weaknesses in account recovery and reset requests – whether by email, phone or chat – and pivot to phishing for recovery keys. These processes need to be sufficiently hardened by providers to prevent trivial abuse by these attackers and to maintain the security benefits of using passkeys. Users also need to be educated on how to spot and report abuse of these processes before their accounts are compromised.


Securing Payment Software: How the PCI SSF Modular System Enhances Flexibility and Security

The framework was introduced to replace the aging Payment Application Data Security Standard (PA-DSS), which primarily focused on payment application security. As software development technologies and methodologies rapidly evolved, the need for a dynamic and adaptable security standard became increasingly apparent, prompting the creation of the PCI SSF. As a result, the PCI SSF encompasses a broader range of security requirements specifically tailored for modern software environments. ... The modular system of the PCI SSF is designed to offer both flexibility and scalability, enabling organizations to address their specific security needs based on their unique software environments. The modular approach allows organizations to select and implement only the components relevant to their software, which simplifies the process of achieving and maintaining compliance. ... The PCI SSF’s modular system marks a transformative step in payment software security, effectively balancing adaptability with comprehensive protection against evolving cyber threats. Its flexible, scalable, and comprehensive approach allows organizations to tailor their security efforts to their unique needs, ensuring robust protection for payment data.


The cloud cost wake-up call I predicted

Cloud computing starts as a flexible and budget-friendly option, especially with its enticing pay-per-use model. However, unchecked growth can turn this dream into a financial nightmare due to the complexities the cloud introduces. According to the Flexera State of the Cloud Report, 87% of organizations have adopted multicloud strategies, complicating cost management even more by scattering workloads and expenses across various platforms. The rise of cloud-native applications and microservices has further complicated cost management. These systems abstract physical resources, simplifying development but making costs harder to predict and control. Recent studies have revealed that 69% of CPU resources in container environments go unused, a direct contradiction of optimal cost management practices. Although open-source tools like Prometheus are excellent for tracking usage and spending, they often fall short as organizations scale. ... A critical component of effective cloud cost management is demystifying cloud pricing models. Providers often lay out their pricing structures in great detail, but translating them into actual costs can be difficult. A lack of understanding can lead to spiraling costs.


Using cognitive diversity for stronger, smarter cyber defense

Cognitive biases significantly influence decision-making during cybersecurity incidents by framing how individuals interpret information, assess risks, and respond to threats. ... Integrating cognitive science into cybersecurity tools involves understanding how human cognitive processes – such as perception, memory, decision-making, and problem-solving – affect security tasks. Designing user-friendly tools requires aligning cognitive models with diverse user behaviors while managing cognitive load, ensuring usability without compromising security, and adapting to the fast-changing cybersecurity landscape. Interfaces must cater to varying skill levels, promote awareness, and support effective decision-making, all while addressing ethical considerations like privacy and bias. Interdisciplinary collaboration between psychology, computer science, and cybersecurity experts is essential but challenging due to differences in expertise and communication styles. ... Cognitive diversity can frequently divert resources or distract from present, immediate or emerging threats. Focus on the things that are likely to happen. Implement defensive measures that require few resources while more complex measures are prioritized.


Next-gen Ethernet standards set to move forward in 2025

Beyond the big-ticket items of higher bandwidth and AI, a key activity in any year for Ethernet is interoperability testing for all manner of existing and emerging specifications. 200 Gigabits per second per lane is an important milestone on the path to an even higher bandwidth Ethernet specification that will exceed 1 Terabit per second. ... With 800GbE now firmly established, adoption and expansion into ever larger bandwidth will be a key theme in 2025. There will be no shortage of vendors offering 800 GbE equipment in 2025, but when it comes to Ethernet standards, focus will be on 1.6 Terabits/second Ethernet. “As 800GbE has come to market, the next speed for Ethernet is being talked about already,” Martin Hull, vice president and general manager for cloud and AI platforms at Arista Networks, told Network World. “1.6Tb Ethernet is being discussed in terms of the optics, the form factors and use cases, and we expect industry leaders to be trialing 1.6T systems towards the end of 2025.” ... “High-speed computing requires high bandwidth and reliable interconnect solutions,” Rodgers said. “However, high-speed also means high power and higher heat, placing more demands on the electrical grid and resources and creating a demand for new options.” That’s where LPOs will fit in.


Stop wasting money on ineffective threat intelligence: 5 mistakes to avoid

“CTI really needs to fall underneath your risk management and if you don’t have a risk management program you need to identify that (as a priority),” says Ken Dunham, cyber threat director for the Qualys Threat Research Unit. “It really should come down to: what are the core things you’re trying to protect? Where are your crown jewels or your high value assets?” Without risk management to set those priorities, organizations will not be able to appropriately set requirements for intelligence collection that will have them gather the kind of relevant sources that pertain to their most valuable assets. ... Bad intelligence can often be worse than none, leading to a lot of time wasted by analysts to validate and contextualize poor quality feeds. Even worse, if this work isn’t done appropriately, poor quality data could potentially even lead to misguided choices at the operational or strategic level. Security leaders should be tasking their intelligence team with regularly reviewing the usefulness of their sources based on a few key attributes. ... Even if CTI is doing an excellent job collecting the right kind of quality intelligence that its stakeholders are asking for, all that work can go for naught if it isn’t appropriately routed to the people that need it — in the format that makes sense for them.


Exposure Management: A Strategic Approach to Cyber Security Resource Constraint

XM is a proactive and integrated approach that provides a comprehensive view of potential attack surfaces and prioritises security actions based on an organisation’s specific context. It’s a process that combines cloud security posture, identity management, internal hosts, internet-facing hosts and threat intelligence into a unified framework, enabling security teams to anticipate potential attack vectors and fortify their defences effectively. Unlike traditional security measures, XM takes an “outside-in” approach, assessing how attackers might exploit vulnerabilities across interconnected systems. This shift in mindset is crucial for identifying and prioritising the most significant threats. By focusing on the most critical vulnerabilities and potential attack paths, XM allows security teams to allocate resources more efficiently and enhance their overall security posture. ... By providing a unified view of the entire attack path, XM improves an organisation’s ability to manage security risks. This unified view allows security teams to understand how vulnerabilities can be exploited and prioritise those that pose the greatest risk. Security teams are then able to guarantee efficient resource allocation and focus on threats with the most significant impact on business operations.


How GenAI is Exposing the Limits of Data Centre Infrastructure

Energy-intensive Graphics Processing Units (GPUs) that power AI platforms require five to 10 times more energy than Central Processing Units (CPUs) because of their larger number of transistors. This is already impacting data centres. There are also new, cost-effective design methodologies incorporating features such as 3D silicon stacking, which allows GPU manufacturers to pack more components into a smaller footprint. This again increases the power density, meaning data centres need more energy and create more heat. Another trend running in parallel is a steady fall in TCase (or Case Temperature) in the latest chips. TCase is the maximum safe temperature for the surface of chips such as GPUs. It is a limit set by the manufacturer to ensure the chip will run smoothly and not overheat or require throttling, which impacts performance. On newer chips, TCase is coming down from 90 to 100 degrees Celsius to 70 or 80 degrees, or even lower. This is further driving the demand for new ways to cool GPUs. As a result of these factors, air cooling is no longer doing the job when it comes to AI. It is not just the power of the components, but the density of those components in the data centre. Unless servers become three times bigger than they were before, efficient heat removal is needed.


The Configuration Crisis and Developer Dependency on AI

As our IT infrastructure grows ever more modular, layered and interconnected, we deal with myriad configurable parts — each one governed by a dense thicket of settings. All of our computers — whether in our pockets, on our desks or in the cloud — have a bewildering labyrinth of components with settings to discover and fiddle with, both individually and in combination. ... A couple of strategies I’ve mentioned before bear repeating. One is the use of screenshots, which are now a powerful index in the corpus of synthesized knowledge. Like all forms of web software, the cloud platforms’ GUI consoles present a haphazard mix of UX idioms. A maneuver that is conceptually the same across platforms will often be expressed using very different affordances. AIs are pattern recognizers that can help us see and work with the common underlying patterns.


From project to product: Architecting the future of enterprise technology

Modern enterprise architecture requires thinking like an urban planner rather than a building inspector. This means creating environments that enable innovation while ensuring system integrity and sustainability. ... Just as urban planners need to develop a shared vocabulary with city officials, developers and citizens, enterprise architects must establish a common language that bridges technical and business domains. Complex ideas that remain purely verbal often get lost or misunderstood. Documentation and diagrams transform abstract discussions into something tangible. By articulating fitness functions — automated tests tied to specific quality attributes like reliability, security or performance — teams can visualize and measure system qualities that align with business goals. ... Technology governance alone will often just inform you of capability gaps, tech debt and duplication — this could be too late! Enterprise architects must shift their focus to business enablement. This is much more proactive in understanding the business objectives and planning and mapping the path for delivery. ... Just as cities must evolve while preserving their essential character, modern enterprise architecture requires built-in mechanisms for sustainable change. 
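
As a hedged example of the fitness functions mentioned above, here is a small latency test in the pytest style: it pins a quality attribute (p95 latency) to a budget so architectural regressions fail the build. The operation under test and the 250 ms budget are made-up placeholders.

```python
import statistics
import time

def fake_checkout():
    time.sleep(0.01)  # stand-in for the real operation under test

def measure_p95_latency_ms(call, samples: int = 50) -> float:
    """Time repeated calls and return the 95th-percentile latency in ms."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        call()
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.quantiles(timings, n=20)[18]  # 95th percentile

def test_checkout_latency_fitness():
    """Fitness function: fail the build if the quality attribute regresses."""
    p95 = measure_p95_latency_ms(fake_checkout)
    assert p95 < 250, f"p95 latency {p95:.0f} ms exceeds the 250 ms budget"

test_checkout_latency_fitness()  # pytest would collect this automatically
```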



Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein

Daily Tech Digest - January 14, 2025

Why Your Business May Want to Shift to an Industry Cloud Platform

Industry cloud services typically embed the data model, processes, templates, accelerators, security constructs, and governance controls required by the adopter's industry, says Shriram Natarajan, a director at technology research and advisory firm ISG, in an online interview. "This [approach] allows faster development of new functionality, better security and governance, and an enhanced user/stakeholder experience." ... Enterprises spanning many industries can benefit significantly by moving to an industry cloud platform, Campbell says. "Businesses that are faced with many regulations and operational requirements can especially benefit from the specialized services industry cloud platforms provide," he notes, adding that many industry cloud platforms are preconfigured to meet specific needs, which can help accelerate the time to value realized. Many enterprises have a blinkered view on verticalized solutions, Natarajan says. "They tend to see the platforms they already have in-house and look for solutions that these platforms provide." He believes that enterprise IT and business teams can both benefit from looking at the landscape of verticalized industry cloud platforms.


FRAML Reality Check: Is Full Integration Really Practical?

While integration between AML and fraud teams is a desirable goal, experts say it should not be viewed as the best solution. Paul Dunlop, insider risk consultant at a financial services firm, stressed the importance of collaboration over integration. "I am against the oversimplification of fraud and AML integration. Banking risks are multifaceted, involving not just fraud and AML but also cybersecurity, privacy and other domains," Dunlop said. "Integration decision should be assessed based on the bank's maturity level, regulatory environment and unique operational needs." "Cost should not be the sole factor behind this decision. One must assess operational and risk management trade-offs," he said. Meng Liu, senior analyst at Forrester, said that despite AML and fraud being two distinct functions at present, the trend toward more consolidated and integrated financial crime management is real. ... Despite the differences in fraud and AML teams, some use cases, such as scams, human trafficking and child exploitation, cry out for better collaboration, Mitchell said. "These require shared data and aligned strategies." But high-volume fraud detection such as check and card fraud is less suited for joint efforts due to operational complexity.


Ransomware abuses Amazon AWS feature to encrypt S3 buckets

In the attacks by Codefinger, the threat actors used compromised AWS credentials to locate victims' keys with 's3:GetObject' and 's3:PutObject' privileges, which allow these accounts to encrypt objects in S3 buckets through SSE-C. The attacker then generates an encryption key locally to encrypt the target's data. Since AWS doesn't store these encryption keys, data recovery without the attacker's key is impossible, even if the victim reports unauthorized activity to Amazon. "By utilizing AWS native services, they achieve encryption in a way that is both secure and unrecoverable without their cooperation," explains Halcyon. Next, the attacker sets a seven-day file deletion policy using the S3 Object Lifecycle Management API and drops ransom notes in all affected directories that instruct the victim to pay a ransom to a given Bitcoin address in exchange for the custom AES-256 key. ... Halcyon also suggests that AWS customers set restrictive policies that prevent the use of SSE-C on their S3 buckets. Concerning AWS keys, unused keys should be disabled, active ones should be rotated frequently, and account permissions should be kept at the minimum level required.
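
A sketch of the restrictive policy Halcyon suggests, applied with boto3. The `Null` condition on the documented `s3:x-amz-server-side-encryption-customer-algorithm` key denies any PutObject request that carries an SSE-C header; the bucket name is a placeholder.

```python
import json

import boto3

BUCKET = "example-bucket"  # hypothetical bucket name

# Deny any PutObject request that asks for SSE-C (customer-provided keys).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySSECustomerKeys",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {
            # "Null": "false" means the SSE-C header IS present on the request.
            "Null": {"s3:x-amz-server-side-encryption-customer-algorithm": "false"}
        },
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```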


How AI and ML are transforming digital banking security

By continuously learning from new data, ML improves over time, adapting to the organization’s needs and the ever-evolving fraud tactics. This helps reduce false positives, ensuring legitimate transactions proceed smoothly while maintaining security. Predictive analytics also help identify potential threats before they materialize, and fraud scoring prioritizes high-risk activities for action. AI/ML-powered systems are scalable and effective against sophisticated threats, such as synthetic identity fraud and account takeovers, and can monitor multiple banking channels simultaneously. They automate detection, lowering operational costs, and provide seamless customer experiences, thereby enhancing trust. However, nothing is a silver bullet: concerns such as algorithm bias, data privacy, and the need for explainable models persist. Still, despite these potential hurdles, AI and ML are reshaping digital banking security, equipping financial institutions with proactive tools to counter fraud while safeguarding customer trust and regulatory compliance. ... Advanced technologies like AI and ML are helping institutions monitor transactions in real time, detecting anomalies and preventing fraud without directly involving users. Meanwhile, encryption and tokenization protect sensitive data, ensuring transactions remain secure in the background.
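
As an illustration of the anomaly detection and fraud scoring described above, a minimal scikit-learn sketch: an Isolation Forest is fitted on historical “normal” transactions and scores new ones, with the most anomalous flagged for review. The three features and the contamination rate are toy choices; production systems use far richer signals and thresholds.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Toy features per transaction: [amount, hour_of_day, distance_from_home_km]
normal = np.column_stack([
    rng.lognormal(3, 0.5, 1000),   # typical purchase amounts
    rng.integers(8, 22, 1000),     # daytime activity
    rng.exponential(5, 1000),      # usually close to home
])

model = IsolationForest(contamination=0.01, random_state=7).fit(normal)

new_txns = np.array([
    [30.0, 14, 2.0],       # ordinary purchase
    [9500.0, 3, 4200.0],   # large amount, 3 a.m., far from home
])
scores = model.decision_function(new_txns)  # lower = more anomalous
for txn, s in zip(new_txns, scores):
    print(txn, "FLAG for review" if s < 0 else "ok", round(float(s), 3))
```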


The Evolution of Business Systems in the Digital Era

Systems of Record (SORs) serve as the foundation of organizational infrastructure, storing essential data such as customer information, financial transactions, and operational processes. These systems are designed to maintain structured and reliable records, ensuring data integrity, compliance, and security. They play a critical role in regulatory reporting, audits, and operational consistency. ... Systems of Engagement (SOEs) are the digital front doors of modern businesses, facilitating seamless and interactive communication with customers and employees. They go beyond simple data storage and retrieval, focusing on creating dynamic and personalized experiences across various channels. SOEs prioritize customer-centric approaches, ensuring businesses can deliver dynamic and interactive communication. ... Systems of Intelligence (SOIs) represent the pinnacle of data-driven decision making. Built upon the foundation of Systems of Record (SORs) and Systems of Engagement (SOEs), SOIs leverage the power of artificial intelligence (AI) and machine learning (ML) to transform raw data into actionable insights. Unlike their predecessors, SOIs go beyond simply identifying patterns and trends. They possess the ability to predict future outcomes and even prescribe optimal courses of action.


Gen AI strategies put CISOs in a stressful bind

One of the most problematic gen AI issues CISOs face is how casual many gen AI vendors are being when selecting the data used to train their models, Townsend said. “That creates a security risk for the organization.” ... generative AI’s penetration into SaaS solutions makes this more problematic. “The attack surface for gen AI has changed. It used to be enterprise users using foundation models provided by the biggest providers. Today, hundreds of SaaS applications have embedded LLMs that are in use across the enterprise,” said Routh, who today serves as chief trust officer at security vendor Saviynt. “Software engineers have more than 1 million open source LLMs at their disposal on HuggingFace.com.” ... All this can take a psychological toll on CISOs, Townsend surmised. “When they feel overwhelmed, they shut down,” he said. “They do what they feel they can, and they will ignore what they feel that they can’t control.” ... “The bad actors are feverishly working to exploit these new technologies in malicious ways, so the CISOs are right to be concerned about how these new gen AI solutions and systems can be exploited,” Taylor said. 


How Enterprises and Startups Can Master AI With Smarter Data Practices

For enterprises, however, supplying AI systems with the data they need to thrive is several orders of magnitude more complicated. There are two main reasons for this: First, enterprises don’t have the same information aggregation ability that exists in the consumer AI world. Consumer AI companies can use any public data on the web to train their AI models; think of it as an entire continent of information to which they have unfettered access. Enterprise data, on the other hand, exists within small, disparate, and oftentimes disconnected information archipelagos. Additionally, enterprises are working with many types of data, including relational data from operational systems, decades of poorly organized folders of documents, and audio and numeric data from payroll and financial systems. Further, enterprises must contend with additional layers of regulatory complexity regarding handling personal and private data. To build impactful AI tools, an enterprise’s algorithms must be fed or trained on specific data sets that span multiple sources, including the company’s human resources, finance, customer relationship management, supply chain management, and other systems.


Yes, you should use AI coding assistants—but not like that

AI is a must for software developers, but not because it removes work. Rather, it changes how developers should work. For those who just entrust their coding to a machine, well, the results are dire. ... Use AI wrong and things get worse, not better. Stanford researcher Yegor Denisov-Blanch notes that his team has found that AI increases both the amount of code delivered and the amount of code that needs reworking, which means that “actual ‘useful delivered code’ doesn’t always increase” with AI. In short, “some people manage to be less productive with AI.” So how do you ensure you get more done with coding assistants, not less? ... Here’s the solution: If you want to use AI coding assistants, don’t use them as an excuse not to learn to code. The robots aren’t going to do it for you. The engineers who will get the most out of AI assistants are those who know software best. They’ll know when to give control to the coding assistant and how to constrain that assistance (perhaps to narrow the scope of the problem they allow it to work on). Less-experienced engineers run the risk of moving fast but then getting stuck or not recognizing the bugs that the AI has created. ... AI can’t replace good programming, because it really doesn’t do good programming.


AI Tools Amplify API Security Threats Worldwide

The financial implications of API breaches prove substantial. According to Kong's report, 55% of organizations experienced an API security incident in the past year. Among those affected, 47% reported remediation costs exceeding $100,000, while 20% faced expenses surpassing $500,000. Gartner's research underscores this urgency, highlighting that API breaches typically result in ten times more leaked data than other types of security incidents. ... While AI technologies, particularly LLMs, drive unprecedented innovation, they introduce new vulnerabilities. These advanced tools enable attackers to exploit shadow APIs, bypass traditional defenses and manipulate API traffic in unexpected ways. The survey indicates that 84% of leaders predict AI and LLMs will increase the complexity of securing APIs over the next two to three years, emphasizing the need for immediate action. Despite 92% of organizations implementing measures to secure their APIs, 40% of leaders remain skeptical about whether their investments will adequately counter AI-driven risks. The regional disparity in preparedness stands out: 13% of U.S. organizations acknowledge taking no specific measures against AI threats, compared to 4% in the U.K.


From AI Assistants to Swarms of Thousands of Collaborating AI Agents: Is Your Architecture Ready?

Agentic AI is likely to create more issues in some areas than others. The Agentic Architecture Framework identifies seven areas that will require more support in the form of new or updated frameworks, tools and techniques for Agentic AI capability-building and architecture development. ... Agentic AI Strategy begins with defining a clear target state across the Agentic AI maturity dimensions and levels. This step establishes the organization’s AI aspirations and provides a benchmark for future transformation. Once the target state is identified, the next step involves conducting a gap analysis to determine the differences between the current capabilities identified in the previous step and the organization’s ambition. With these gaps clarified, organizations can then focus on identifying and quantifying high-impact AI use cases that align with business objectives and support progression toward the target state. ... The Agentic AI Operating Model defines how AI systems, people, and processes work together to deliver value. It focuses on integrating AI into the organization’s core operations, ensuring that AI agents operate seamlessly within new and existing workflows and alongside human teams.



Quote for the day:

"No person can be a great leader unless he takes genuine joy in the successes of those under him." -- W. A. Nance

Daily Tech Digest - January 13, 2025

Artificial intelligence is optimising the entire M&A lifecycle by providing data-driven insights at every stage to enable informed decisions. Companies considering a merger or acquisition can use AI to understand market trends, the performance of past deals, and other events of relevance to decide the way forward. For potential candidates, big data, analytics and AI algorithms help process vast amounts of corporate information from a variety of sources – financial statements, analyst briefings, media reports, and more – to identify acquisition targets meeting their requirements. AI augments the experts in due diligence, performing complex financial modelling or reviewing extensive legal documents and conducting risk analysis with higher accuracy in a fraction of the time compared to existing methods. ... When replacing a legacy enterprise system with a cloud-based solution, organisations can become operational within six to fourteen months, depending on size, which is much faster than in a traditional on-premise scenario. ... Differences in the merging companies’ technology architectures, tools and configurations make it extremely challenging to ascertain M&A security posture accurately, completely, and on time, even if the organisations are already on the same cloud.


Time for a change: Elevating developers’ security skills

With detection and remediation tools trivializing code security in the same environments they trained with, it’s not unreasonable to think that junior engineers could maintain the ability to perform this basic task, as well as an understanding of the risks and consequences of the vulnerabilities they create as they draft code. For mid-level engineers, given the increased security proficiency earlier in their careers, it can now be expected that it is their responsibility to enforce code security with their engineers before it is even reviewed by senior developers. ... For this effort, developers get a substantial boost to their skill set from this deepened security knowledge, which can be very valuable given the current state of affairs for hiring cybersecurity professionals: a dearth of talent available, growing backlogs, and cybersecurity risks increasing in number and scope. Most importantly, they can achieve it without sacrificing productivity: detecting and remediating vulnerabilities can be done as easily as spellcheck finds spelling errors, and training can be short and tailored to what they’re working on, all within the integrated development environment (IDE) they work in every day. ... In addition, organizations can finally achieve the vision of true shift-left by integrating security into every level of the SDLC and adopt the culture of security they’ve rightly been clamoring for.


How Your Digital Footprint Fuels Cyberattacks — and What to Do About It

If you are like most of us, you have been using digital services for years without realizing that you have been giving hackers access to the details of your personal life. On social media, we voluntarily share PII about who we are and where we are, using the location check-in features. ... Reducing your digital footprint doesn’t have to mean going off the grid. Here are some practical steps you can take:

- Use separate emails for different accounts: Don’t rely on one email for everything. This minimizes the damage if one account is hacked – it won’t lead hackers to all your other services.
- Review privacy settings regularly: Many apps have default settings that overshare your information. For instance, on apps like Strava or Telegram, you can turn off location tracking and limit who can contact you or add you to conversations. A quick check of these settings can significantly reduce your exposure.
- Avoid saving passwords in web browsers: Browsers prioritize convenience, not security. Instead, use a password manager. These tools securely store your passwords and can generate strong, unique ones for each account (see the sketch below). This reduces the risk of malware or phishing attacks stealing your credentials directly from your browser.
- Think before you post: Share less on social media, especially in real time. This will make you harder to track and target.
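For a sense of what “strong, unique” means in practice, the snippet below mirrors the kind of generator password managers use. It is a minimal illustration, not a replacement for a real password manager, which also stores and syncs credentials securely.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from a mixed character set using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct password per account, so one breach cannot cascade to the others.
for site in ("email", "banking", "social"):
    print(site, generate_password())
```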


What is career catfishing, the Gen Z strategy to irk ghosting corporates?

After slogging through the exhausting process of job hunting — submitting countless applications, enduring endless rounds of interviews, and anxiously waiting for updates from unresponsive hiring managers — Gen Z workers have found a way to reclaim the balance of power. The rising trend, dubbed “career catfishing,” involves Gen Zs (those aged 27 and under) accepting job offers only to never show up on their first day. According to a survey by CV Genius, which polled 1,000 UK employees across generations, approximately 34 per cent of Zoomers admitted to engaging in career catfishing. ... Gen Z alone cannot shoulder the blame for the rise of such behaviours. Office ghosting — where one party cuts off communication without notice — is now a common phenomenon. ... Managers and owners cited entitlement, lack of motivation, lack of effort, and poor productivity as reasons for terminating Gen Z employees. Some even referred to them as the snowflake generation and claimed they were too easily offended, which further justified their dismissal. The practice of career catfishing could further reinforce these stereotypes, making it even harder for young professionals to build trust with potential employers.


The next AI wave — agents — should come with warning labels

AI agents that use unclean data can introduce errors, inconsistencies, or missing values that make it difficult for the model to make accurate predictions or decisions. If the dataset has missing values for certain features, for instance, the model might incorrectly assume relationships or fail to generalize well to new data. An agent could also draw data from individuals without consent or use data that’s not anonymized properly, potentially exposing personally identifiable information. Large datasets with missing or poorly formatted data can also slow model training and cause it to consume more resources, making it difficult to scale the system. In addition, while AI agents must also comply with the European Union’s AI Act and similar regulations, innovation will quickly outpace those rules. Businesses must not only ensure compliance but also manage various risks, such as misrepresentation, policy overrides, misinterpretation, and unexpected behavior. “These risks will influence AI adoption, as companies must assess their risk tolerance and invest in proper monitoring and oversight,” according to a Forrester Research report — “The State Of AI Agents” — published in October. 
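A small pandas sketch of the first two problems described above; the dataset and column names are made up for illustration.

```python
import hashlib

import pandas as pd

# Toy dataset an agent might ingest: missing values plus raw PII.
df = pd.DataFrame({
    "email": ["a@example.com", "b@example.com", "c@example.com"],
    "age": [34, None, 51],
    "spend": [120.0, 80.0, None],
})

# 1. Missing values: quantify them before training, then impute or drop;
#    silently feeding NaNs downstream can skew the relationships a model learns.
print(df.isna().sum())
df["age"] = df["age"].fillna(df["age"].median())
df["spend"] = df["spend"].fillna(df["spend"].median())

# 2. PII: pseudonymize identifiers rather than passing them through verbatim.
#    (Hashing is pseudonymization, not full anonymization, but it avoids
#    exposing raw identifiers to the agent.)
df["email"] = df["email"].map(lambda e: hashlib.sha256(e.encode()).hexdigest()[:12])

print(df)
```

Checks like these are cheap relative to the cost of an agent acting on distorted or non-consented data.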


Euro-cloud Anexia moves 12,000 VMs off VMware to homebrew KVM platform

“We used to pay for VMware software one month in arrears,” he said. “With Broadcom we had to pay a year in advance with a two-year contract.” That arrangement, the CEO said, would have put extreme stress on the company’s cashflow. “We would not be able to compete with the market,” he said. “We had customers on contracts, and they would not pay for a price increase.” Windbichler considered legal action, but felt the fight would have been slow and expensive. Anexia therefore resolved to migrate, a choice made easier by its ownership of another hosting business, Netcup, which already ran on a KVM-based platform. Another factor in the company’s favour was its abstraction layer, “Anexia Engine,” which hid the fact it ran VMware: customers never saw Virtzilla’s wares and instead managed their VM fleets through a different interface. ... The CEO thinks more companies will move from VMware. “I do not believe Broadcom will be successful,” he told The Register. “They lost all the trust. I have talked to so many VMware customers and they say they cannot work with a company like that.” Regulators are also interested in Broadcom’s practices, he said.
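The article doesn’t detail the tooling behind Anexia’s migration. Purely as an illustration of the mechanics, the sketch below drives the open-source virt-v2v converter, a common route for VMware-to-KVM moves; the paths and VM names are hypothetical, and this is not presented as what Anexia actually used.

```python
import subprocess

# Hypothetical batch of guests exported from VMware as .vmx/.vmdk files.
VMS = ["web-01", "db-01"]

for vm in VMS:
    # virt-v2v converts the guest (drivers, bootloader) for KVM and writes
    # a qcow2 disk image into the local output storage directory.
    subprocess.run(
        [
            "virt-v2v",
            "-i", "vmx", f"/exports/{vm}/{vm}.vmx",          # input: VMware VMX definition
            "-o", "local", "-os", "/var/lib/libvirt/images",  # output: local KVM storage
            "-of", "qcow2",
        ],
        check=True,
    )
```

At Anexia’s scale (12,000 VMs), the hard part is orchestration – scheduling, verification, and cutover – rather than the per-VM conversion itself.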


Preparing for AI regulation: The EU AI Act

Among the uses of AI that are banned under Article 5 are AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques. Article 5 also prohibits the use of AI systems that exploit any of the vulnerabilities of a person or a specific group of people due to their age, disability, or a specific social or economic situation. Systems that analyse social behaviours and then use this information in a detrimental way are also prohibited under Article 5 if their use goes beyond the original intent of the data collection. Other areas covered by Article 5 include the use of AI systems in law enforcement and biometrics. Industry observers describe the act as a “risk-based” approach to regulating artificial intelligence. ... Organisations operating in the EU will also need to take into account the Corporate Sustainability Reporting Directive (CSRD). Given the power-hungry nature of machine learning and AI inference, the extent to which AI is used may well be influenced by such regulations going forward. While the AI Act builds on existing regulations, as Mélanie Gornet and Winston Maxwell note in the Hal Open Science paper The European approach to regulating AI through technical standards, it takes a different route from these. Their observation is that the EU AI Act draws inspiration from European product safety rules.


Enterprise Data Architecture: A Decade of Transformation and Innovation

Privacy and compliance drive architectural decisions. The One Identity Graph we developed manages complex customer relationships while ensuring CCPA and GDPR compliance. This graph-based solution has prevented data breaches and reduced regulatory risks by implementing automated data lineage tracking, consent management, and real-time data masking. These features reinforce customer trust through transparent data handling and granular access controls. The business impact proves substantial. The platform’s real-time fraud detection analyzes transaction patterns across multiple channels, preventing fraudulent activities before completion. It optimizes inventory dynamically across thousands of locations by simultaneously processing point-of-sale data, supply chain updates, and external market factors. Supply chain disruptions trigger immediate alerts through a sophisticated event correlation engine, enabling preventive action before customer impact. Edge computing represents the next frontier. Processing data closer to its source minimizes latency, critical for IoT applications and real-time decisions. Our implementation reduces data transfer costs by 40% while improving response times for customer-facing applications. 
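As a rough illustration of the consent-driven, real-time masking described above, consider the sketch below. The field names and consent flags are hypothetical; the actual One Identity Graph implementation is not public.

```python
# Hypothetical consent-aware field masking, in the spirit of the
# CCPA/GDPR controls described above; not the actual implementation.
MASKABLE = {"email", "phone", "address"}

def mask(value: str) -> str:
    return value[:2] + "***" if len(value) > 2 else "***"

def apply_masking(record: dict, consents: dict) -> dict:
    """Return a view of the record with non-consented PII fields masked."""
    return {
        field: value if field not in MASKABLE or consents.get(field, False) else mask(value)
        for field, value in record.items()
    }

customer = {"id": "c-42", "email": "jane@example.com", "phone": "555-0101", "tier": "gold"}
consents = {"email": True, "phone": False}  # customer consented to email use only

print(apply_masking(customer, consents))
# {'id': 'c-42', 'email': 'jane@example.com', 'phone': '55***', 'tier': 'gold'}
```

Applying masking at read time, per consent flag, is what allows granular access control without duplicating the underlying data.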


AI is set to transform education — what enterprise leaders can learn from this development

While AI tools show immense promise in addressing resource constraints, their adoption raises broader questions about the role of human connection in learning. Which brings us back to Unbound Academy. Students will spend two hours online each school morning working through AI-driven lessons in math, reading, and science. Tools like Khanmigo and IXL will personalize the instruction and analyze progress, adjusting the difficulty and content in real time to optimize learning outcomes. The school’s charter application asserts that “this ensures that each student is consistently challenged at their optimal level, preventing boredom or frustration.” Unbound Academy’s model significantly reduces the role of human teachers. Instead, human “guides” provide emotional support and motivation while also leading workshops on life skills. What will students lose by spending most of their learning time with AI instead of human instructors, and how might this model reshape the teaching profession? The Unbound Academy model is already used in several private schools, and the results from those schools are cited to substantiate its claimed advantages. ... For any of this to happen, the industry needs action that matches the rhetoric.


6 ways continuous learning can advance your career

Joys said thinking critically is about learning how a new idea or innovation might be translated into the current organizational context. "At the end of the day, the company is writing a paycheck for you," he said. "Think about how new stuff provides business value." Joys said professionals also need to ensure the benefits of the things they introduce through their learning processes are tracked and traced. "That's about measuring those efforts to ensure you can say, 'Here's a new piece of technology. Here's how we'll measure how this technology lines up with our corporate strategy and vision.'" ... Worsley told ZDNET he likes to learn on the job rather than acquire new knowledge in the classroom. "I'm not a bookish person. I don't go out and read. I recognize that I need to learn specific things because I've got a problem to solve," he said. "I'll learn about it, get the right people talking, and get the solutions underway. Tell me something's impossible and I'll tell you it's not." ... Keith Woolley, chief digital and information officer at the University of Bristol, said the great thing about his job is that it's like a hobby. "I'm naturally interested in what I do. So, I read things around me without realizing I'm consuming other information," he said. "If you're excited about what you do, learning comes naturally because it's a genuine interest. Then learning happens when you don't expect it."



Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer