Daily Tech Digest - September 02, 2024

AI Demands More Than Just Technical Skills From Developers

Unlike in the past, when developers took instructions from a team lead and executed tasks as individual contributors, they now outsource problem-solving and code generation to AI tools and models. By partnering with GenAI to solve complex problems, developers who were once individual contributors are becoming team leads in their own right. This new workflow requires developers to elevate their critical-thinking skills and their empathy for end users. No longer can they afford to operate with a superficial understanding of the task at hand. Now, it’s paramount that developers understand the why that is driving their initiative so that they can lead their AI counterparts to the most desirable outcomes. ... Developers are now co-creating IP. Who owns that IP? The prompt engineer? The GenAI tool’s vendor? If developers write code with a certain tool, do they own that code? In an industry where tool sets are moving so quickly, the answer varies based on which tool you’re using and which version of it; even different tools from the same vendor can have different rules. Intellectual property rights are still evolving.


Embracing Neurodiversity in IT Workplace to Bridge Talent Gaps

To accommodate neurodiversity effectively, organizations must adopt a multifaceted approach. This includes providing tailored support and resources to neurodiverse employees, such as flexible work arrangements, assistive technologies, and specialized training programs. Additionally, fostering open communication and creating a supportive network of colleagues and mentors can help neurodiverse individuals feel valued and empowered to contribute their unique insights and perspectives. ... The first step, according to Leantime CEO and co-founder Gloria Folaron, is to create a cultural expectation of self-awareness — from leadership to human resources. "The self-awareness can extend across any biases you might have, relationships, or negative experiences or reactions that exist inside. It's a self-checking mechanism," she said. The second benefit is that many neurodivergent individuals have not been well supported in the past — they've been forced to create their own systems to fit into more traditional work environments. Promoting self-awareness even at the employee level empowers them to start thinking about their own needs.


Ransomware recovery: 8 steps to successfully restore from backup

Use either physical write-once-read-many (WORM) technology or virtual equivalents that allow data to be written but not changed. This does increase the cost of backups since it requires substantially more storage. Some backup technologies only save changed and updated files or use other deduplication technology to keep from having multiple copies of the same thing in the archive. ... In addition to keeping the backup files themselves safe from attackers, companies should also ensure that their data catalogs are safe. “Most of the sophisticated ransomware attacks target the backup catalog and not the actual backup media, the backup tapes or disks, as most people think,” says Amr Ahmed, EY America’s infrastructure and service resiliency leader. This catalog contains all the metadata for the backups, the index, the bar codes of the tapes, the full paths to data content on disks, and so on. “Your backup media will be unusable without the catalog,” Ahmed says. Restoring without one would be extremely hard or impractical. Enterprises need to ensure that they have in place a backup solution that includes protections for the backup catalog, such as an air gap.
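The catalog-protection advice above doesn't prescribe an implementation, but the idea can be sketched simply: hash every catalog file and keep the resulting manifest on separate, air-gapped media, so tampering is detectable before a restore is attempted. The functions below are an illustrative sketch under those assumptions, not a feature of any particular backup product; paths and storage choices are invented.

```python
import hashlib
from pathlib import Path

def build_manifest(catalog_dir: str) -> dict:
    """Map each file under catalog_dir to its SHA-256 digest."""
    manifest = {}
    for path in sorted(Path(catalog_dir).rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(catalog_dir))
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def verify_manifest(catalog_dir: str, saved: dict) -> list:
    """Return names of catalog files that are new, missing, or altered."""
    current = build_manifest(catalog_dir)
    changed = {f for f in saved if saved.get(f) != current.get(f)}
    added = set(current) - set(saved)
    return sorted(changed | added)
```

In practice the saved manifest would be read back from the offline copy; any nonempty result from `verify_manifest` means the catalog should not be trusted for a restore.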


Complying with PCI DSS requirements by 2025

Perhaps one of the most significant changes in terms of preventing e-commerce fraud is the requirement to deploy change- and tamper-detection mechanisms that alert on unauthorized modifications to the HTTP headers and the contents of payment pages as received by the consumer browser (11.6.1). Most e-commerce-related cardholder data (CHD) theft comes from the abuse of JavaScript used within online stores (otherwise known as web-based skimming). Recent research has shown that most website payment pages have around 100 different scripts, some of which come from the merchant itself and some from third parties, and any one of these scripts can potentially be altered to harvest cardholder data. Equally, this could be the payment page of a payment service provider (PSP) to which a merchant redirects, or a PSP-generated inline frame (iframe) the merchant embeds, making this an issue that is also relevant to PSPs. The ideal scenario is to reduce this risk by knowing what is in use, what is authorized, and what has not been altered, which is the principal aim of requirement 6.4.3. This mandates an inventory of scripts, their authorization, and evidence that they are necessary and have been validated.
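The inventory-and-authorization principle of requirement 6.4.3 can be sketched in a few lines: collect every external script on the page as the browser received it, and flag anything not on the authorized list. This is a minimal illustration, not a compliant implementation; the page HTML and inventory entries are invented, and a real deployment would also pin script content (for example via hashes) and monitor HTTP headers per 11.6.1.

```python
from html.parser import HTMLParser

class ScriptCollector(HTMLParser):
    """Collect the src attribute of every external <script> tag on a page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def unauthorized_scripts(page_html: str, inventory: set) -> list:
    """Return script sources present on the page but absent from the inventory."""
    collector = ScriptCollector()
    collector.feed(page_html)
    return [s for s in collector.sources if s not in inventory]
```

Any nonempty result would trigger the kind of alert 11.6.1 calls for, since an unknown script on a payment page is a classic web-skimming indicator.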


Inside CISA's Unprecedented Election Security Mission

Despite ongoing efforts by foreign adversaries to influence U.S. elections, attempts to subvert the vote have been largely unsuccessful in past elections. Expanded federal threat detection and response strategies during the 2016 cycle, and CISA's efforts in 2020, played a significant role in thwarting attempts by Russia and others to compromise the integrity of the electoral process. The agency has recently issued warnings about "increasingly aggressive Iranian activity during this election cycle," including reported activities to compromise former President Donald Trump's campaign. The Department of Homeland Security designated election infrastructure as a subset of the government facilities sector in 2017, further recognizing the vast networks of voter registration databases, information technology systems, polling places and voting systems as critical infrastructure. ... The agency over the last six years has rolled out a wide range of no-cost voluntary services and resources aimed at reducing risks to election infrastructure, including vulnerability scanning, physical security assessments and supporting the nationwide adoption of .gov domains, which experts say enhance trust by ensuring that election information is verified and comes from official, credible sources.


The Gen Z Guide to Getting Ahead at Work

As a young person entering the workplace with new ideas and fresh eyes and perspectives, you have unique value, experts said. Don't be shy to share your thoughts. You might know something others don't. That could look like sharing tools or shortcuts you know within apps, ideas or stories about how you've solved problems in the past, Paaras said. You might have valuable experience related to a particular topic or insight into how other people your age see things. Or you might be able to spot the inefficiency or error of how things are regularly done. "You're seeing things for the first time, and you can highlight that," Abrahams said. "Focus on the value you bring." ... Set time aside for chatting, by video or in person, with your colleagues and supervisor. Building good relationships can help foster people's trust and willingness to collaborate with you. It also could be a differentiator in your career advancement. "Your presence needs to be felt by others," Wilk said. Seek out one-on-one meetings and casual conversations. Be ready with thoughts, questions and goals for the conversation, Wilk said. When in doubt, remember people love to talk about themselves, she added. Ask them about their career or experience on the job.


Unified Data: The Missing Piece to the AI Puzzle

“A unified data strategy can significantly reduce the time data scientists spend on accessing, re-formatting, or creating data, thereby improving their effectiveness in developing AI models,” Francis says. Yaad Oren, managing director of SAP Labs US and global head of SAP BTP innovation, explains that incorporating AI across an organization is not possible without trusted and governed data. “A unified data strategy simplifies the data landscape, maintains data context and ensures accurate training of AI models,” he says. This leads to more effective AI deployments and allows customers to harness data to drive deeper insights, faster growth, and more efficiency. “A unified data architecture is crucial for creating a holistic view of business operations and avoiding the ramifications of flawed AI,” he adds. By bringing together disparate data from across the business, a data architecture ensures data context is kept intact, providing a picture of how the data was generated, where it resides, when it was created, and who it relates to. “A strategy that incorporates a data architecture empowers users to access and use data in real time, creating a single source of truth for decision making, and automating data management processes,” Oren explains.


The Next Business Differentiator: 3 Trends Defining The GenAI Market

Different industries have distinct needs, and as with cloud, standardized or general GenAI models and services can’t support the specialized requirements of specific industries. This is especially true for regulated industries with stringent governance, risk and compliance standards — industry- or domain-specific GenAI models will help organizations comply with regulations and compliance standards, ensuring data security and ethical considerations are adhered to. ... The main reason for prioritizing responsible AI is to mitigate bias. Mitigating bias is fundamental to delivering GenAI solutions that have true market applicability and relevance. Ultimately, bias comes from three areas: algorithms, data and humans. Bias from AI algorithms has fallen dramatically in the last decade. Today, algorithms are mostly trustworthy, and the biggest sources of bias in AI are data and humans. When it comes to data, bias exists because of a lack of quality and variety, as well as the often incomplete datasets used to train the algorithm. With humans, there is an inherent lack of trust when it comes to AI, whether because of reported threats to people’s livelihoods or because AI can hallucinate information.


Miniaturized brain-machine interface processes neural signals in real time

The MiBMI's small size and low power are key features, making the system suitable for implantable applications. Its minimal invasiveness ensures safety and practicality for use in clinical and real-life settings. It is also a fully integrated system, meaning that the recording and processing are done on two extremely small chips with a total area of 8 mm². This is the latest in a new class of low-power BMI devices developed at Mahsa Shoaran's Integrated Neurotechnologies Laboratory (INL) at EPFL's IEM and Neuro X institutes. "MiBMI allows us to convert intricate neural activity into readable text with high accuracy and low power consumption. This advancement brings us closer to practical, implantable solutions that can significantly enhance communication abilities for individuals with severe motor impairments," says Shoaran. Brain-to-text conversion involves decoding neural signals generated when a person imagines writing letters or words. In this process, electrodes implanted in the brain record neural activity associated with the motor actions of handwriting. The MiBMI chipset then processes these signals in real time, translating the brain's intended hand movements into corresponding digital text.


From Transparency to the Perils of Oversharing

While openness fosters collaboration and trust, oversharing can inadvertently lead to micromanagement, misinterpretation, and a loss of trust, undermining the foundations of a healthy team dynamic. ... Transparency without trust can create a blame culture where team members feel exposed to criticism for every minor mistake. This effect can result in individuals trying to cover their tracks or avoid taking risks, undermining the very principles of Agile. Decision paralysis: when too much transparency leads stakeholders or managers to second-guess every team decision, it can create decision paralysis. The team may feel that every move is under a microscope, leading them to slow down or become overly cautious, eroding trust in their ability to make decisions independently. ... It’s not just the team that needs to manage transparency effectively; stakeholders also need guidance on interpreting the information they receive. Educating stakeholders on Agile practices and the purpose of various metrics can prevent misinterpretation and unnecessary interference. In other words, run workshops for stakeholders on interpreting data and information from your team.



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - September 01, 2024

Since cyber risk can’t be eliminated, the question that must be answered is: Can cyber risk at least be managed in a cost-effective manner? The answer is an emphatic yes! ... Identify the sources of cyber risk. These sources can be broken down into various categories. More specifically, there are internal and external threats, as well as potential vulnerabilities that are the basis for cyber risk. Identifying these threats and vulnerabilities is not only a logical place to start the process of managing an organization’s cyber risk, but will also help to frame an approach for addressing it. Estimate the likelihood (i.e., probability) that your organization will experience a cyber breach. Of course, any single point estimate of the probability of a cyber breach is just that—an estimate of one possibility from a probability distribution. Thus, rather than estimating a single probability, a range of probabilities could be considered. Estimate the maximum cost to an organization if a cyber breach occurs. Here again, a point estimate of the maximum cost resulting from a cyber-attack is just that—an estimate of one possible cost. Thus, rather than estimating a single cost, a range of costs could be considered.
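The advice above to use ranges rather than point estimates lends itself to a quick Monte Carlo sketch: draw breach probability and breach cost from their ranges and look at the spread of expected annual loss. All numbers below are illustrative assumptions, not figures from the article, and uniform sampling is the simplest of many possible distribution choices.

```python
import random

def expected_loss_samples(p_range, cost_range, n=10_000, seed=7):
    """Sample breach probability and cost from ranges; return sorted expected losses."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        p = rng.uniform(*p_range)        # annual breach probability for this draw
        cost = rng.uniform(*cost_range)  # cost if a breach occurs
        samples.append(p * cost)
    return sorted(samples)

# Illustrative ranges: 5-25% annual breach probability, $100k-$2M breach cost.
losses = expected_loss_samples((0.05, 0.25), (100_000, 2_000_000))
low, high = losses[len(losses) // 20], losses[-len(losses) // 20]
```

The `low`/`high` pair (roughly the 5th and 95th percentiles) gives decision-makers a band of plausible expected losses rather than a single misleading number.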


How AI is Revolutionizing Prosthetics to Refine Movement

AI prosthetics technology is advancing on several fronts. Researchers at the UK's University of Southampton and Switzerland's EPFL University have, for instance, developed a sensor that allows prosthetic limbs to sense wetness and temperature changes. "This capability helps users adjust their grip on slippery objects, such as wet glasses, enhancing manual dexterity and making the prosthetic feel more like a natural part of their body," Torrang says. Multi-texture surface recognition is another area of important research. Advanced AI algorithms, such as neural networks, can be used to process data from liquid metal sensors embedded in prosthetic hands. "These sensors can distinguish between different textures, enabling users to feel various surfaces," Torrang says. "For example, researchers have developed a system that can accurately detect and differentiate between ten different textures, helping users perform tasks that require precise touch." Natural sensory feedback research is also attracting attention. AI can be used to provide natural sensory feedback through biomimetic stimulation, which mimics the natural signals of the nervous system.


From Chaos to Clarity: CTO’s Guide to Successful Software {Code} Refactoring

As software grows, code can become overly complicated and difficult to understand, making modifications and extensions challenging. Refactoring simplifies and clarifies the code, enhancing its readability and maintainability. Signs of poor performance: if software performance degrades or fails to meet efficiency benchmarks, refactoring can optimize it and improve its speed and responsiveness. Migration to newer technologies and libraries: when migrating legacy systems to newer technologies and libraries, code refactoring ensures smooth integration and compatibility, preventing potential issues down the line. Frequent bugs: frequent bugs and system crashes often indicate a messy codebase that requires cleanup. If your team spends more time tracking down bugs than developing new features, code refactoring can improve stability and reliability. Onboarding a team of new developers: onboarding new developers is another instance where refactoring is beneficial. Standardizing the code base ensures new team members can understand and work with it more effectively.
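As a minimal illustration of what behavior-preserving refactoring looks like, the sketch below extracts duplicated branching logic into a single helper. The example and its pricing rule are invented for illustration; the point is only that the refactored version computes exactly what the original did, with the rule now living in one place.

```python
def total_before(prices, is_member):
    # Original shape: the discount rule is duplicated inside the loop body.
    total = 0
    for p in prices:
        if is_member:
            total += p * 0.9
        else:
            total += p
    return total

def _discounted(price, is_member):
    # Extracted helper: the single place the discount rule now lives.
    return price * 0.9 if is_member else price

def total_after(prices, is_member):
    # Refactored: same behavior, clearer intent, one rule to change later.
    return sum(_discounted(p, is_member) for p in prices)
```

A test asserting `total_after` equals `total_before` on the same inputs is the safety net that makes this kind of cleanup low-risk.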


How to Win the War Against Bad Master Data

The most immediate chance to make a difference lies within your existing dataset. Take the initiative to compare your supplier and customer master data with reliable external sources, such as government databases, regulatory lists, and other trusted entities, to pinpoint discrepancies and omissions. Consider this approach as a form of “data governance as a service,” as a shortcut to data quality where you can rely on comparison with the authoritative data sources to make sure fields are the right length, in the right format, and, even more important, accurate. This task may require significant effort (unless automated master data validation and enrichment is employed), but it can provide an immediate ROI. Each corrected error and updated entry contributes to greater compliance, lower risk and enhanced operational efficiency within the organization. However, many companies lack a consistent process for cleaning data, and even among those with a process in place, the scope and frequency of data cleansing is often insufficient. The best data quality comes from continuous automated cleansing and enrichment.
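The "right length, right format" checks described above can be sketched as a small rule table applied to each master-data record. The fields, patterns, and records below are illustrative assumptions; comparison against authoritative external sources (the accuracy half of the advice) is left out for brevity.

```python
import re

# Illustrative format rules of the kind an authoritative source would imply.
RULES = {
    "country": re.compile(r"[A-Z]{2}"),              # ISO 3166-1 alpha-2 shape
    "vat_id": re.compile(r"[A-Z]{2}[0-9A-Z]{8,12}"), # country prefix + 8-12 chars
}

def validate_record(record: dict) -> list:
    """Return the fields of a supplier record that fail their format rule."""
    errors = []
    for field, pattern in RULES.items():
        if not pattern.fullmatch(record.get(field, "")):
            errors.append(field)
    return sorted(errors)
```

Run continuously over the supplier and customer masters, even a table this small surfaces the discrepancies and omissions the article describes before they become compliance or efficiency problems.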


3 Ways to Boost Cybersecurity Defenses With Limited Resources

Assume-breach accepts that breaches are inevitable, shifting the focus from preventing all breaches to minimizing the impact of a breach through security measures, protocols and tools that are designed with the assumption that an attacker may have already compromised parts of the network. Paired with the assume-breach mindset, these security measures, protocols and tools focus on protecting data, detecting unusual behavior and responding quickly to potential threats. Just as cars are equipped with seatbelts and airbags to reduce the fallout of a crash, assume-breach encourages organizations to put proactive measures in place to reduce the impact and damage when the worst occurs. ... In the event a cyber attack does occur, having a well-tested and resilient plan in place is key to minimize impacts. As the entire organization participates in these practices and trainings, leaders can focus on implementing assume-breach security measures, protocols and tools. These measures should include enhancing real-time visibility, identifying vulnerabilities, blocking known ransomware points and strategic asset segmentation. 


The Future of LLMs Is in Your Pocket

The first reason is that, due to the cost of GPUs, generative AI has broken the near-zero marginal cost model that SaaS has enjoyed. Today, anything bundling generative AI commands a high seat price simply to make the product economically viable. This detachment from underlying value is consequential for many products that can’t price optimally to maximize revenue. In practice, some products are constrained by a pricing floor (e.g., it is impossible to discount 50% to 10x the volume), and some features can’t be launched because the upsell doesn’t pay for the inference cost. ... The second reason is that the user experience with remote models could be better: generative AI enables useful new features, but they often come at the expense of a worse experience. Applications that didn’t depend on an internet connection (e.g., photo editors) now require it. Remote inference introduces additional friction, such as latency. Local models remove the dependency on an internet connection. The third reason has to do with how models handle user data. This plays out in two dimensions. First, serious concerns have been raised about sharing growing amounts of private information with AI systems.


GenOps: learning from the world of microservices and traditional DevOps

How do the operational requirements of a generative AI application differ from other applications? With traditional applications, the unit of operationalisation is the microservice: a discrete, functional unit of code, packaged up into a container and deployed into a container-native runtime such as Kubernetes. For generative AI applications, the comparative unit is the generative AI agent: also a discrete, functional unit of code defined to handle a specific task, but with some additional constituent components that make it more than ‘just’ a microservice. ... The Reasoning Loop is essentially the full scope of a microservice, and the model and Tool definitions are the additional powers that make it into something more. Importantly, although the Reasoning Loop logic is just code and therefore deterministic in nature, it is driven by the responses from non-deterministic AI models, and this non-deterministic nature is what creates the need for Tools, as the agent ‘chooses for itself’ which external service should be used to fulfill a task. A fully deterministic microservice has no need for this ‘cookbook’ of Tools to select from: its calls to external services are pre-determined and hard-coded into the Reasoning Loop.
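The agent anatomy described here can be sketched in a few lines: a deterministic Reasoning Loop whose control flow is driven by a model that picks Tools from a registry. `fake_model` and the tools below are placeholders standing in for a real LLM call and real external services; the structure, not the stub, is the point.

```python
# Two illustrative Tools: discrete external capabilities the agent may call.
def get_weather(city):
    return f"Sunny in {city}"

def get_time(city):
    return f"12:00 in {city}"

TOOLS = {"weather": get_weather, "time": get_time}

def fake_model(task, history):
    """Stand-in for a non-deterministic LLM: decides which Tool to use next."""
    if not history:
        return {"action": "weather", "arg": task}
    return {"action": "final", "answer": history[-1]}

def reasoning_loop(task, model=fake_model, max_steps=5):
    """Deterministic loop driven by the model's (non-deterministic) choices."""
    history = []
    for _ in range(max_steps):
        decision = model(task, history)
        if decision["action"] == "final":
            return decision["answer"]
        tool = TOOLS[decision["action"]]   # the agent 'chooses for itself'
        history.append(tool(decision["arg"]))
    return history[-1]
```

A plain microservice would call `get_weather` directly; here the call target is selected at runtime from the Tool registry, which is exactly why the loop needs the ‘cookbook’ the article describes.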


Saudi Arabia strengthening cyber resilience through skills development

Currently, four of the top 10 fastest-growing job roles in Saudi Arabia fall within the fields of cybersecurity, data analysis, and software development. As the demand for such expertise far outstrips supply, the government, industry, and academia must collaborate to develop and expand pathways to nurture talent in this field. Enhanced curricula and specialized programs will help upskill students in data protection, while partnerships with global tech companies can facilitate knowledge transfer and provide access to cutting-edge technologies and methodologies in public and private sector organizations. The Saudi government’s investments in initiatives to enhance digital skills, including a $1.2 billion plan to train 100,000 youths by 2030 in critical fields like digital security, are a crucial step in this direction. Saudi Arabia today outpaces the global average in cybersecurity trends, with 3.1 percent compared to the global average of 2.5 percent. An overwhelming 79 percent of Saudi employees anticipate substantial shifts in their work dynamics due to AI advancements. This is reflected in the rise of new learners in the Kingdom who are building skill proficiencies and acquiring new digital skills to boost their economic mobility.


How Financial Firms Can Build Better Data Strategies

For financial organizations, data strategies are often driven by CISOs and tend to focus on data protection and security. This enables regulatory and operational compliance by ensuring the right people can access the right data at the right time while still aligning to the corporate security and risk stance. But this now comes within the context of the emergence of artificial intelligence and the growing sense that firms need to leverage it to gain a competitive edge. For example, loan data can be made more valuable with AI-driven analytics. Or banks can use AI tools to identify patterns related to fraud or compliance challenges to help them avoid potential regulatory pitfalls. ... New tools often seem like the answer to every data challenge, but the right tools build on what’s already in place. To ensure that new solutions deliver a strategic advantage, leaders must ask simple questions: What’s the business need? Why am I pursuing this approach or tool? This requires an honest assessment of current data conditions, business needs and coworker skill sets. While many businesses are fully compliant and meet regulations, their data may not be highly active or in great shape. Identifying issues lets leaders define business needs.


Achieving digital transformation in fintech one step at a time

The fintech industry continues to experience unprecedented growth year after year. According to a Statista survey, there are more than 3 billion customers in this niche worldwide, and this number is expected to grow to 4.8 billion by 2028. Advanced financial technologies are gradually replacing traditional services. For instance, a recent study by the American Bankers Association has shown that 71% of users prefer managing their financial affairs online (48% of them with the help of mobile apps), while only 9% of clients would rather go to a physical bank branch. In addition, more and more people around the world are favoring cryptocurrencies over traditional currencies as payment and investment tools. This is supported by Forbes statistics, which state that the capitalization of the crypto market has exceeded 2.5 trillion dollars. As is well known, demand creates supply. Such keen interest in modern fintech instruments and services from users all over the world generates constant growth of supply in this sphere. Every year, plenty of new companies emerge, while existing ones introduce numerous innovations to keep their regular customers and attract new ones.



Quote for the day:

"True greatness, true leadership, is achieved not by reducing men to one's service but in giving oneself in selfless service to them." -- J. Oswald Sanders

Daily Tech Digest - August 31, 2024

CTO to CTPO: Navigating the Dual Role in Tech Leadership

A competent CPTO can streamline processes, reduce the risk of misalignment, and offer a clear vision for both product and technology initiatives. This approach can also be cost-effective, as executive roles come with high salaries and significant demands. Combining these roles simplifies the organizational structure, providing a single point of contact for research and development. This works well in environments where product and technology are closely integrated and both functions are mature. In my role, most of my day-to-day activities are focused on the product. I’m very conscious that I don’t have a counterpart to challenge my thinking, so I spend a lot of time with senior business stakeholders to ensure the debates and discussions occur. I also encourage this in my leadership team to ensure that technology and product leaders are rigorous in their thinking and decision-making. Ultimately, deciding to have one or two roles for product and technology depends on a company’s specific needs, maturity, and strategic priorities. For some, clarity and focus come from having both a CPO and a CTO. For others, the simplicity and unified vision that comes from a single leader makes more sense.


How quantum computing could revolutionise (and disrupt) our digital world

Everything that is encrypted today could potentially be laid bare. Banking, commerce, and personal communications—all the pillars of our digital world—could be exposed, leading to consequences we’ve never encountered. Thankfully, Q-Day is estimated to be five to ten years away, mainly because building a stable quantum computer is fiendishly difficult. The processors need to be cooled to near absolute zero, among other technical challenges. But make no mistake—it’s coming. Sergio stressed that businesses and countries need to prepare now. Already, some groups are harvesting encrypted data with the intention of decrypting it when quantum computing capabilities mature. Much like the Y2K bug, Q-Day requires extensive preparation. This August, the National Institute of Standards and Technology (NIST) released the first set of post-quantum encryption standards designed to withstand quantum attacks. Similarly, the UK’s National Cyber Security Centre (NCSC) advises that migrating to post-quantum cryptography (PQC) is a complex, multi-year effort that requires immediate action.


Transparency is often lacking in datasets used to train large language models

Researchers often use a technique called fine-tuning to improve the capabilities of a large language model that will be deployed for a specific task, like question-answering. For fine-tuning, they carefully build curated datasets designed to boost a model’s performance for this one task. The MIT researchers focused on these fine-tuning datasets, which are often developed by researchers, academic organizations, or companies and licensed for specific uses. When crowdsourced platforms aggregate such datasets into larger collections for practitioners to use for fine-tuning, some of that original license information is often left behind. “These licenses ought to matter, and they should be enforceable,” Mahari says. For instance, if the licensing terms of a dataset are wrong or missing, someone could spend a great deal of money and time developing a model they might be forced to take down later because some training data contained private information. “People can end up training models where they don’t even understand the capabilities, concerns, or risk of those models, which ultimately stem from the data,” Longpre adds.
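The provenance check the researchers advocate can be sketched as a simple filter over dataset metadata before fine-tuning begins. The allowlist and dataset entries below are invented for illustration; the key design choice is that a missing or unrecognized license is grounds for review rather than silently permitted.

```python
# Illustrative allowlist of license identifiers acceptable for the intended use.
ALLOWED_LICENSES = {"cc-by-4.0", "mit", "apache-2.0"}

def split_by_license(collection):
    """Partition dataset entries into usable vs. flagged-for-review."""
    usable, flagged = [], []
    for ds in collection:
        license_id = (ds.get("license") or "").lower()
        if license_id in ALLOWED_LICENSES:
            usable.append(ds["name"])
        else:
            flagged.append(ds["name"])  # missing/unknown license: review first
    return usable, flagged
```

Running a gate like this over an aggregated collection surfaces exactly the left-behind license information the MIT team found, before any training money is spent.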


Cyber Insurance: A Few Security Technologies, a Big Difference in Premiums

Finding the right security technologies for the business is increasingly important, because ransomware incidents have accelerated over the past few years, says Jason Rebholz, CISO at Corvus Insurance, a cyber insurer. Attackers posted the names of at least 1,248 victims to leak sites in the second quarter of 2024, the highest quarterly volume to date, according to the firm. ... "We take VPNs very seriously in how we price [our policies] and what recommendations we give to our companies ... and this is mostly related to ransomware," Itskovich says. For those reasons, businesses should take a look at their VPN security and email security, if they want to better secure their environments and, by extension, reduce their policy costs. Because an attacker will eventually find a way to compromise most companies, having a way to detect and respond to threats is vitally important, making managed detection and response (MDR) another technology that will eventually pay for itself, he says. ... For smaller companies, email security, cybersecurity-awareness training, and multi-factor authentication are critical, says Matthieu Chan Tsin, vice president of cybersecurity services for Cowbell.


Cybersecurity for Lawyers: Open-Source Software Supply Chain Attacks

A supply chain attack co-opts the trust in the open-source development model to place malicious code inside the victim’s network or computer systems. Essentially, the attacker inserts malicious code, like a foodborne virus, into the software during its development process, positioning the malicious code to be unintentionally installed by the end user installing the software within their network. Any organization using the affected project has unwittingly invited the malicious code within its walls. Malicious code may already reside within a newly adopted OSS project, or it could be delivered via an updated version of a trusted project. The difference between an OSS supply chain attack and a traditional supply chain attack (e.g., inserting malware into proprietary software) is that the organization using OSS has access to its entire code at the outset and throughout its use (and can therefore examine it for vulnerabilities or otherwise have greater insight into how it functions when used maliciously). While some organizations may have the resources and wherewithal to leverage this as a security advantage, many will not.


A Measure of Motive: How Attackers Weaponize Digital Analytics Tools

IP geolocation utilities can be used legitimately by advertisers and marketers to gauge the geo-dispersed impact of advertising reach and the effectiveness of marketing funnels (albeit with varying levels of granularity and data availability). However, Mandiant has observed IP geolocation utilities used by attackers. Some real-world attack patterns that Mandiant has observed leveraging IP geolocation utilities include: malware payloads connecting to geolocation services for infection-tracking purposes upon successful host compromise, as with the Kraken Ransomware, which gives attackers a window into how fast and how far their campaign is spreading; and malware conditionally performing malicious actions based on IP geolocation data. The latter functionality allows attackers a level of control over their window of vulnerability and ensures they do not engage in “friendly fire” if their motivations are geo-political in nature, such as indiscriminate nation-state targeting by hacktivists. An example of this technique can be seen in the TURKEYDROP variant of the Adwind malware, which attempts to surgically target systems located in Turkey.


AI development and agile don't mix well

Interestingly, several AI specialists see formal agile software development practices as a roadblock to successful AI. ... "While the agile software movement never intended to develop rigid processes -- one of its primary tenets is that individuals and interactions are much more important than processes and tools -- many organizations require their engineering teams to universally follow the same agile processes." ... The report suggested: "Stakeholders don't like it when you say, 'it's taking longer than expected; I'll get back to you in two weeks.' They are curious. Open communication builds trust between the business stakeholders and the technical team and increases the likelihood that the project will ultimately be successful." Therefore, AI developers must ensure technical staff understand the project purpose and domain context: "Misunderstandings and miscommunications about the intent and purpose of the project are the most common reasons for AI project failure. Ensuring effective interactions between the technologists and the business experts can be the difference between success and failure for an AI project."


A quantum neural network can see optical illusions like humans do. Could it be the future of AI?

When we see an optical illusion with two possible interpretations (like the ambiguous cube or the vase and faces), researchers believe we temporarily hold both interpretations at the same time, until our brains decide which picture should be seen. This situation resembles the quantum-mechanical thought experiment of Schrödinger’s cat. This famous scenario describes a cat in a box whose life depends on the decay of a quantum particle. According to quantum mechanics, the particle can be in two different states at the same time until we observe it – and so the cat can likewise simultaneously be alive and dead. I trained my quantum-tunnelling neural network to recognise the Necker cube and Rubin’s vase illusions. When faced with the illusion as an input, it produced an output of one or the other of the two interpretations. Over time, which interpretation it chose oscillated back and forth. Traditional neural networks also produce this behaviour, but in addition my network produced some ambiguous results hovering between the two certain outputs – much like our own brains can hold both interpretations together before settling on one.


How To Channel Anger As An Emotional Intelligence Strategy

If you want to use anger in a constructive way, you first have to break the mental stigma that “anger is bad.” Anger, like all emotions, is an instinctual response. Rather than label this response as good or bad, it’s more useful to think of it simply as data. Your emotions offer you data, and you can harness that data in a number of ways. ... The second half of the battle is to learn to use your anger with intent. To do so, you have to understand the potential for anger to hijack your behavior. “[Anger] can also be a negative,” Scherzer warned in the same interview. “It has been [for me] in the past, where you almost get too much adrenaline, too much emotion, and you aren’t thinking clearly.” In other words, Scherzer doesn’t just dial in anger and then see what happens. He channels it with purpose. Even though he may appear intense or even hotheaded, his intent is strong. And that intent is what enables him to harness his anger in a constructive way. ... Since this is a more advanced emotional intelligence strategy, there are a couple of things you should keep top of mind. First, if you’re the kind of person whose anger frequently gets in your way, you should likely focus your time on anger-management strategies, not this one. Second, you should start by applying this strategy in a lower-stakes situation.


How to Improve Your Leadership Style With Cohort-Based Leadership Training

Cohort-based learning is rooted in Albert Bandura's social learning theory. Social interaction improves learning because humans are social creatures by nature. Hence, we enjoy learning more from interactive, multimedia methods than passive ones that lack feedback or immediate results. Perspective-taking and mentalizing in cohorts promote empathy and communication skills, while emotional resonance and dialogue deepen understanding for all involved. The accountability that forms in groups encourages commitment and performance. Community-based learning, feedback, emotional support and real-world application ignite individual and collective learning. ... The structured curriculum is designed to cover various aspects of leadership, building upon previous sessions to provide a comprehensive learning journey. Practical tools, measurements and models are provided to apply directly to the work environment. Real-time feedback and consulting during group sessions help participants tackle specific workplace challenges, allowing for continuous learning, application and feedback to support their development.



Quote for the day:

“A bend in the road is not the end of the road unless you fail to make the turn.” -- Helen Keller

Daily Tech Digest - August 30, 2024

Balancing AI Innovation and Tech Debt in the Cloud

While AI presents incredible opportunities for innovation, it also sheds light on the need to reevaluate existing governance awareness and frameworks to include AI-driven development. Historically, DORA metrics were introduced to quantify elite engineering organizations along two critical dimensions: speed and safety. Speed alone does not indicate elite engineering if the safety aspects are disregarded altogether. AI development cannot be left behind when considering the safety of AI-driven applications. Running AI applications according to data privacy, governance, FinOps and policy standards is critical now more than ever, before this tech debt spirals out of control and data privacy is infringed upon by machines that are no longer in human control. Data is not the only thing at stake, of course. Costs and breakage should also be a consideration. If the CrowdStrike outage from last month has taught us anything, it’s that even seemingly simple code changes can bring down entire mission-critical systems at a global scale when not properly released and governed. This involves enforcing rigorous data policies, cost-conscious policies, compliance checks and comprehensive tagging of AI-related resources.


AI and Evolving Legislation in the US and Abroad

The best way to prepare for regulatory changes is to get your house in order. Most crucial is having an AI and data governance structure. This should be part of the overall product development lifecycle so that you’re thinking about how data and AI are being used from the very beginning. Some best practices for governance include: Forming a cross-functional committee to evaluate the strategic use of data and AI products; Ensuring you have experts from different domains working together to design algorithms that produce output that is relevant, useful and compliant; Implementing a risk assessment program to determine what risks are at issue for each use case; Executing an internal and external communication plan to inform about how AI is being used in your company and the safeguards you have in place. AI has become a significant, competitive factor in product development. As businesses develop their AI program, they should continue to abide by responsible and ethical guidelines to help them stay compliant with current and emerging legislation. Companies that follow best practices for responsible use of AI will be well-positioned to navigate current rules and adapt as regulations evolve.


The paradox of chaos engineering

Although chaos engineering offers potential insights into system robustness, enterprises must scrutinize its demands on resources, the risks it introduces, and its alignment with broader strategic goals. Understanding these factors is crucial to deciding whether chaos engineering should be a focal area or a supportive tool within an enterprise’s technological strategy. Each enterprise must determine how closely to follow this technological evolution and how long to wait for its technology provider to offer solutions. ... Chaos engineering offers a proactive defense mechanism against system vulnerabilities, but enterprises must weigh its risks against their strategic goals. Investing heavily in chaos engineering might be justified for some, particularly in sectors where uptime and reliability are crucial. However, others might be better served by focusing on improvements in cybersecurity standards, infrastructure updates, and talent acquisition. Also, what will the cloud providers offer? Many enterprises get into public clouds because they want to shift some of the work to the providers, including reliability engineering. Sometimes, the shared responsibility model is too focused on the desires of the cloud providers rather than those of their tenants. You may need to step it up, cloud providers.


Generative AI vs large language models: What’s the difference?

While generative AI has become popular for content generation more broadly, LLMs are making a massive impact on the development of chatbots. This allows companies to provide more useful responses to real-time customer queries. However, there are differences in the approach. A basic generative AI chatbot, for example, would answer a question with a set answer taken from a stock of responses upon which it has been trained. Introducing an LLM as part of the chatbot set-up means its responses become much more detailed and responsive, as if the reply had come from a human advisor rather than a computer. This is quickly becoming a popular option, with firms such as JP Morgan embracing LLM chatbots to improve internal productivity. Other useful implementations of LLMs are to generate or debug code in software development or to carry out brainstorms or research tasks by tapping into various online sources for suggestions. This ability is made possible by a related AI technology called retrieval augmented generation (RAG), in which LLMs draw on vectorized information outside of their training data to ground responses in additional context and improve their accuracy.
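The retrieval step behind RAG can be sketched in a few lines of Python. This is a minimal illustration under loose assumptions: the `embed` function below is a toy bag-of-words stand-in for a real embedding model, and the document store is a plain list rather than a vector database.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words term counts. A real system would call a
    # learned embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank stored documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # Prepend the retrieved passages so the LLM can ground its answer in them.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In a chatbot of the kind described above, something like `build_prompt` runs before every LLM call, so the model answers from current company data rather than only from what it memorised during training.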


Agentic AI: Decisive, operational AI arrives in business

Agentic AI, at its core, is designed to automate a specific function within an organization’s myriad business processes, without human intervention. AI agents can, for example, handle customer service issues, such as offering a refund or replacement, autonomously, and they can identify potential threats on an organization’s network and proactively take preventive measures. ... Cognitive AI agents can also serve as assistants in the healthcare setting by engaging with a patient daily to support mental healthcare treatment, and as student recruiters at universities, says Michelle Zhou, founder of Juji AI agents and an inventor of IBM Watson Personality Insights. The AI recruiter could ask prospective students about their purpose of visit, address their top concerns, infer the students’ academic interests and strengths, and advise them on suitable programs that match their interests, she says. ... The key to getting the most value out of AI agents is getting out of the way, says Jacob Kalvo, co-founder and CEO of Live Proxies, a provider of advanced proxy solutions. “Where agentic AI truly unleashes its power is in the ability to act independently,” he says. 


Protecting E-Commerce Businesses Against Disruptive AI-driven Bot Threats

Bot attacks have long been a thorn in the side of e-commerce platforms. With the growing number of shoppers regularly interacting and sharing their data on retail websites combined with high transaction volumes and a growing attack surface, these online businesses have been a lucrative target for cybercriminal activity. From inventory hoarding, account takeover, and credential stuffing to price scraping and fake account creation, these automated threats have often caused significant damage to e-commerce operations. By using a variety of sophisticated evasion techniques in distributed bot attacks, such as rapidly rotating IPs and identities and manipulating HTTP headers to appear as legitimate requests, attackers have been able to evade detection by traditional bot detection tools. ... With the evolution of Generative AI models and their increasing adoption by bot operators, bot attacks are expected to become even more sophisticated and aggressive in nature. In the future, Gen AI-based bots could be able to independently learn, communicate with other bots, and adapt in real time to an application’s defensive mechanisms.


How copilot is revolutionising business process automation and efficiency

Copilot is essential for optimising operations in addition to increasing productivity. Companies frequently struggle with inefficiencies brought on by human error and manual processes. Copilot ensures seamless operations and lowers the possibility of errors by automating these activities. Consider customer service automation: according to a survey, 72% of consumers believe that agents should automatically be aware of their personal information and service history. Customer relationship management (CRM) systems can incorporate Copilot to give agents real-time information and recommendations, guaranteeing a customised and effective service experience. The efficiency of customer support operations is further enhanced by intelligent routing of questions and automated responses. ... For example, Copilot can forecast performance, assess market trends, and provide investment recommendations in the financial industry. Deloitte claims that artificial intelligence (AI) can reduce operating costs in the finance sector by as much as 20%. Copilot’s automated data analysis and accurate recommendation engine help financial organisations remain ahead of the curve and confidently make strategic decisions.


Is your data center earthquake-proof?

Leuce explains that when Colt DCS designs the layout of a data center, it ensures the most critical parts, such as the data halls, electrical rooms, and other ancillary rooms required for business continuity, are placed on the isolation base. Other elements, such as generators, which are often designed to withstand an earthquake, can then be placed directly on the ground. ... A final technique employed by Colt DCS is the use of dampers – hydraulic devices that dissipate the kinetic energy of seismic events and cushion the impact between structures. Having previously deployed lead dampers at its first data center in Inzai, Japan, Colt has gone a step further at its most recently built facility in Keihanna, Japan, where it is using a combination of an oil damper made out of naturally laminated rubber plus a friction pendulum system, a type of base isolation that allows you to damp both vertically and horizontally. “The reason why we mix the friction pendulum with the oil damper is because with the oil damper, you can actually control the frequency in the harmonics pulsation of the building, depending on the viscosity of the oil, while the friction pendulum does the job of dampening the energy in both directions, so you bring both technologies together,” Leuce explains.


Digital IDV standards, updated regulation needed to fight sophisticated cybercrime

In the face of rising fraud and technological advancements, there is a growing consensus on the need for innovative approaches to financial security. As argued in a recent Forbes article, the upcoming election season presents an opportunity to rethink the ecosystem that supports financial innovation. In the article, Penny Lee, president and CEO of the Financial Technology Association (FTA), advocates for policies that foster technological advancements while ensuring robust regulatory frameworks to protect consumers from emerging threats. ... Amidst these challenges, the payments industry is experiencing a surge in innovation aimed at combating fraud and enhancing security. Real-time payments and secure digital identity systems are at the forefront of these efforts. The U.S. Payments Forum Summer Market Snapshot highlights a growing interest in real-time payments systems, which enable instant transfer of funds and provide businesses and consumers with immediate access to their money. These systems are designed to improve cash flow management and reduce the risk of fraud through enhanced authentication measures.


Transformer AI Is The Healthcare System's Way Forward

Transformer-based LLMs are adapting quickly to the volume of medical information the NHS deals with per patient and on a daily basis. The size of the ‘context windows’, or input, is expanding to accommodate larger patient files, critical for quick analysis of medical notes and more efficient decision making by clinical teams. Beyond speed, these models also deliver high-quality output, which can lead to better patient care. An ‘attention mechanism’ learns how different inputs relate to each other. In a medical context, this can include the interactions of different drugs in a patient’s record. It can find relationships between medicines and certain allergies, predicting the outcome of this interaction on the patient’s health. As more patient records become electronic, the larger training sets will allow LLMs to become more accurate. These AI models can do what takes humans hours of manual effort – sifting through patient notes, interpreting medical records and family history, and understanding relationships between previous conditions and treatments. The benefit of having this system in place is that it creates a full, contextual picture of a patient that helps clinical teams make quick decisions about treatment and counsel.
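The attention mechanism described here can be made concrete with a small sketch. The following is scaled dot-product attention in plain Python, with hand-made vectors standing in for learned embeddings (of, say, the drugs in a patient's record); it is illustrative only, not any production model.

```python
import math

def softmax(xs):
    # Normalise raw scores into weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query scores every key, and the
    # output is the attention-weighted mix of the corresponding values.
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

The weights express how strongly each input relates to every other input, which is exactly the property that lets a transformer surface, for example, a drug-allergy relationship buried deep in a long record.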



Quote for the day:

"Are you desperate or determined? With desperation comes frustration; with determination comes purpose, achievement, and peace." -- James A. Murphy

Daily Tech Digest - August 29, 2024

The human factor in the industrial metaverse

The virtualisation of factories might ensure additional efficiencies, but it has the potential to fundamentally alter the human dynamics within an organisation. With rising reliance on digital tools, it becomes challenging to maintain the human aspects of work. ... Just as evolving innovation is crucial, so is organisational culture. Leaders must promote a culture that supports agility, innovation, and continuous learning to ensure success in a virtual factory environment. This can be achieved by being transparent, encouraging experimentation, and recognising and rewarding an employee’s creativity and adaptability. With the rapid evolution of virtual factories, employees must undergo comprehensive training that covers both technical and soft skills to adapt to the virtual environment. While practical, hands-on exercises are crucial for real-world application, it’s also important to have continuous learning with ongoing workshops, online training, and cross-training opportunities. To further enhance knowledge sharing, establishing mentorship and peer-learning programs can ensure a smooth transition, fostering a cohesive and productive workforce.


Challenging The Myths of Generative AI

The productivity myth suggests that anything we spend time on is up for automation — that any time we spend can and should be freed up for the sake of having even more time for other activities or pursuits — which can also be automated. The importance and value of thinking about our work and why we do it is waved away as a distraction. The goal of writing, this myth suggests, is filling a page rather than the process of thought that a completed page represents. ... The prompt myth is a technical myth at the heart of the LLM boom. It was a simple but brilliant design stroke: rather than a window where people paste text and allow the LLM to extend it, ChatGPT framed it as a chat window. We’re used to chat boxes, a window that waits for our messages and gets a (previously human) response in return. In truth, users provide words that dictate what we get back. ... Intelligence myths arise from the reliance on metaphors of thinking in building automated systems. These metaphors – learning, understanding, and dreaming – are helpful shorthand. But intelligence myths rely on hazy connections to human psychology. They often conflate AI systems inspired by models of human thought for a capacity to think.


The New Frontiers of Cyber-Warfare: Insights From Black Hat 2024

Corporate sanctions against nations are just one aspect of the broader issue. Moss also spoke about a new kind of trade war, where nation-states are pushing back against big tech companies and their political and economic agendas – along with the agendas of countries where these companies are based. Moss noted that countries are now using digital protectionist policies to wage what he called "a new way to escalate." He cited India's 2020 ban on TikTok, which resulted in China’s ByteDance reportedly facing up to $6 billion in losses. Moss also discussed the phenomenon of “app diplomacy,” where governments dictate to big tech companies like Apple and Google which apps are permitted in their markets. He mentioned the practice of “tech sorting,” where countries try to maintain strict control over foreign tech through redirection, throttling, or direct censorship. ... Shifting from concerns over AI to the emerging weapons of cyber espionage and warfare, Moss, moderating Black Hat’s wrap-up discussion, brought up the growing threat of hardware attacks. He asked Jos Wetzels, partner at Midnight Blue, to discuss the increasing accessibility of electromagnetic (EM) and laser weapons.


5 best practices for running a successful threat-informed defense in cybersecurity

Assuming organizations are doing vulnerability scanning across systems, applications, attack surfaces, cloud infrastructure, etc., they will come up with lists of tens of thousands of vulnerabilities. Even big, well-resourced enterprises can’t remediate this volume of vulnerabilities in a timely fashion, so leading firms depend upon threat intelligence to guide them in fixing those vulnerabilities most likely to be exploited presently or in the near future. ... As previously mentioned, a threat-informed defense involves understanding adversary TTPs, comparing these TTPs to existing defenses, identifying gaps, and then implementing compensating controls. These last steps equate to reviewing existing detection rules, writing new ones, and then testing them all to make sure they detect what they are supposed to. Rather than depending on security tool vendors to develop the right detection rules, leading organizations invest in detection engineering across multiple toolsets such as XDR, email/web security tools, SIEM, cloud security tools, etc. CISOs I spoke with admit that this can be difficult and expensive to implement.
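To illustrate the write-and-test loop, here is a hedged sketch of a detection rule expressed as a plain Python predicate over log events; the event fields and the credential-stuffing heuristic are hypothetical, and a real team would encode this in their SIEM or XDR rule language instead.

```python
def detect_credential_stuffing(events, threshold=5):
    # Hypothetical rule: flag any source IP whose failed logins span more
    # than `threshold` distinct accounts, a common credential-stuffing tell.
    failures = {}
    for event in events:
        if event["action"] == "login_failure":
            failures.setdefault(event["src_ip"], set()).add(event["account"])
    return {ip for ip, accounts in failures.items() if len(accounts) > threshold}
```

Running the rule against crafted event streams, one that should trigger it and one that should not, is the testing step described above: the rule ships only once it detects what it is supposed to and stays quiet otherwise.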


Let’s Bring H-A-R-M-O-N-Y Back Into Our Tech Tools

The focus of a platform approach is on harmonized experiences: a state of balance, agreement and even pleasant interaction among the various elements and stakeholders involved in development. There needs to be a way to make it easy and enjoyable to build, test and release at the pace of today’s business without the annoying dependencies that bog down developers along the way — on both the application and infrastructure sides. I believe tool stacks and platforms that use a harmony-focused method can even bring the fun back into development. ... Resilience refers to the ability to withstand and recover from failures and disruptions, and you can’t follow a harmonized approach without it. A resilient architecture is designed to handle unexpected challenges — be they spikes in traffic, hardware malfunctions or software bugs — without compromising core functionality. How do you create resiliency? Through running, testing and debugging your code to catch errors early and often. Building a robust testing foundation can look like having a dedicated testing environment and ephemeral testing features. 


Cybersecurity Maturity: A Must-Have on the CISO’s Agenda

The process of maturation in personnel is often reflected in the way these teams are measured. Less mature teams tend to be measured on activity metrics and KPIs around how many tickets are handled and closed, for example. In more mature organisations the focus has shifted towards metrics like team satisfaction and staff retention. This has come through strongly in our research. Last year 61% of cybersecurity professionals surveyed said that the key metric they used to assess the ROI of cybersecurity automation was how well they were managing the team in terms of employee satisfaction and retention – another indication that it is reaching a more mature adoption stage. Organizations with mature cybersecurity approaches understand that tools and processes need to be guided through the maturity path, but that the reason for doing so is to serve the people working with them. The maturity and skillsets of teams should also be reviewed, and members should be given the opportunity to add their own input. What is their experience of the tools and processes in place? Do they trust the outcomes they are getting from AI- and machine learning-powered tools and processes? 


What can my organisation do about DDoS threats?

"Businesses can prevent attacks using managed DDoS protection services or through implementing robust firewalls to filter malicious traffic and deploying load balancers to distribute traffic evenly when under heavy load,” advises James Taylor, associate director, offensive security practice, at S-RM. “Other defences include rate limiting, network segmentation, anomaly detection systems and implementing responsive incident management plans.” But while firewalls and load balancers may stop some of the more basic DDoS attack types, such as SYN floods or fragmented packet attacks, they are unlikely to handle more sophisticated DDoS attacks which mimic legitimate traffic, warns Donny Chong, product and marketing director at DDoS specialist Nexusguard. “Businesses should adopt a more comprehensive approach to DDoS mitigation such as managed services,” he says. “In this setup, the most effective approach is a hybrid one, combining cloud-based mitigation with on-premises hardware which can be managed externally by the DDoS specialist provider. It also combines robust DDoS mitigation with the ability to offload traffic to the designated cloud provider as and when needed.”
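Of the basic defences listed above, rate limiting is the simplest to sketch. Below is a minimal token-bucket limiter in Python (an illustrative sketch, not any vendor's implementation): the bucket refills at a steady rate, each request spends a token, and traffic beyond the budget is rejected, which blunts volumetric floods even though, as noted above, it cannot stop attacks that mimic legitimate traffic.

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, then spend one if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice one bucket is kept per client IP or session, so a single noisy source exhausts its own budget without affecting everyone else.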


How Aspiring Software Developers Can Stand Out in a Tight Job Market: 5 FAQs

While technical skills are critical, the ability to listen to clients, understand their problems and translate technical information into simple language is also important. Without reliable soft skills, clients may doubt your ability to address their needs. Employers also want candidates who can collaborate and work effectively in a team setting. This involves taking initiative, having strong written and verbal communication skills and being proactive about sharing status updates. Demonstrate these skills by discussing how you applied them in college extracurriculars or in the classroom as part of group project work, and how you plan to apply them in the workplace. In a highly competitive job market, doing so may set you apart from other candidates who offer similar technical backgrounds. ... Research the company before applying for a role so you're prepared with thoughtful questions for your interview. For example, you might want to ask about the new hire onboarding process, professional development opportunities, company culture or specific questions regarding a project the interviewer has recently worked on.


Bridging the AI Gap: The Crucial Role of Vectors in Advancing Artificial Intelligence

Vector databases have recently emerged into the spotlight as the go-to method for capturing the semantic essence of various entities, including text, images, audio, and video content. Encoding this diverse range of data types into a uniform mathematical representation means that we can now quantify semantic similarities by calculating the mathematical distance between these representations. This breakthrough enables “fuzzy” semantic similarity searches across a wide array of content types. While vector databases aren’t new and won’t resolve all current data challenges, their ability to perform these semantic searches across vast datasets and feed that information to LLMs unlocks previously unattainable functionality. ... We are in the early stages of leveraging vectors, both in the emerging generative AI space and the classical ML domain. It’s important to recognise that vectors don’t come as an out-of-the-box solution and can’t simply be bolted onto existing AI or ML programs. However, as they become more prevalent and universally adopted, we can expect the development of software layers that will make it easier for less technical teams to apply vector technology effectively.
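The core lookup a vector database performs can be sketched simply: once every item (text, image, audio or video) has been encoded into the same vector space, semantic search reduces to a distance calculation. A toy nearest-neighbour example, with hand-made three-dimensional vectors standing in for real embeddings, which typically have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Higher is semantically closer; 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec, index):
    # `index` maps item id -> embedding; text, images and audio can all live
    # in the same index once encoded into the shared space.
    return max(index, key=lambda item: cosine_similarity(query_vec, index[item]))
```

A vector database adds approximate-nearest-neighbour indexing so this scales past brute force, but the "fuzzy" matching itself is just this distance comparison.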


AI Can Reshape Insight Delivery and Decision-making

Moving on to risk, Tubbs shares that AI plays a pivotal role in the organizational risk mitigation strategy. With AI, the organization can identify potential risks and propose countermeasures that can significantly contribute to business stability. Therefore, Visa can be proactive in fighting fraud and risks, specifically in the payment landscape. Another use of AI at Visa is in making real-time decisions with real-time analytics. Given the billions of transactions a month, real-time analytics enable the organization to comprehend what the transactions mean and how to make prompt decisions around anomalous behavior. AI also fosters collaboration in the ecosystem and organization by encouraging different teams to work towards a shared objective. Summing up, she refers to the cost-saving aspect of AI and maintains that Visa is driven to automate processes that have historically taken a significant amount of time. Turning to the darker side of AI, Tubbs affirms that it can also be used by fraudsters for nefarious purposes. To counter this, Visa constantly evaluates its models and algorithms. She notes that Visa has a dedicated team to look into the dark web to understand the actions of fraudsters.



Quote for the day:

"Successful and unsuccessful people do not vary greatly in their abilities. They vary in their desires to reach their potential." -- John Maxwell

Daily Tech Digest - August 28, 2024

Improving healthcare fraud prevention and patient trust with digital ID

Digital trust involves the use of secure and transparent technologies to protect patient data while enhancing communication and engagement. For example, digital consent forms and secure messaging platforms allow patients to communicate with their healthcare providers conveniently while ensuring that their data remains protected. Furthermore, integrating digital trust technology into healthcare systems can streamline administrative processes, reduce paperwork, and minimize the chances of errors, according to a blog post by Five Faces. This not only enhances operational efficiency but also improves the overall patient experience by reducing wait times and simplifying access to medical services. ... These smart cards, embedded with secure microchips, store vital patient information and health insurance details, enabling healthcare providers to access accurate and up-to-date information during consultations. The use of chip-based ID cards reduces the risk of identity theft and fraud, as these cards are difficult to duplicate and require secure authentication methods. This technology ensures that only authorized individuals can access patient information, thereby protecting sensitive data from unauthorized access.


A CEO's Take on AI in the Workforce

Those ignoring the AI transformation and not uptraining their skilled staff are not putting themselves in a position to make use of untapped data that can provide insights into other areas of opportunity for their business. Making minimal-to-no investments in emerging technology merely delays the inevitable and puts companies at a disadvantage at the hands of their competitors. Alternatively, being too aggressive with AI can lead to security vulnerabilities or critical talent loss. While AI integration is critical to accelerating business outputs, doing so without moderators, data safeguards, and regulators to keep organizations in line with data governance and compliance is actually exposing companies to security issues. ... AI should not replace people, but rather presents an opportunity to better utilize them. AI can help solve time-management and efficiency issues across organizations, allowing skilled people to focus on creative and strategic roles or projects that drive better business value. The role of AI should focus on automating time-consuming, repetitive, administrative tasks, thereby leaving individuals to be more calculated and intentional with their time.


The promise of open banking: How data sharing is changing financial services

The benefits of open banking are multifaceted. Customers gain greater control over their financial data, allowing them to securely share it with authorized providers. This empowers them to explore a wider range of customized financial products and services, ultimately promoting financial stability and well-being. Additionally, open banking fosters innovation within the industry, as Fintech companies leverage customer-consented data to develop cutting-edge solutions. The Account Aggregator (AA) framework, regulated by the Reserve Bank of India (RBI), is a cornerstone of open banking in India. AAs act as trusted intermediaries, allowing users to consolidate their financial data from various sources, including banks, mutual funds, and insurance companies, into a single platform. ... APIs empower platforms to aggregate FD offerings from a multitude of banks across India. This provides investors with a comprehensive view of available options, allowing them to compare interest rates, tenures, minimum deposit requirements, and other features within a single platform. This transparency empowers informed decision-making, enabling investors to select the FD that best aligns with their risk appetite and financial goals.
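The aggregation-and-comparison pattern described above can be sketched in a few lines. The bank names, rates, and field names below are illustrative placeholders, not any real AA or bank API:

```python
from dataclasses import dataclass

@dataclass
class FDOffer:
    bank: str
    interest_rate: float   # annual %, illustrative
    tenure_months: int
    min_deposit: int       # in rupees

def best_offers(offers, max_tenure_months, budget):
    """Filter offers the investor can actually take, then rank by rate."""
    eligible = [o for o in offers
                if o.tenure_months <= max_tenure_months and o.min_deposit <= budget]
    return sorted(eligible, key=lambda o: o.interest_rate, reverse=True)

# Offers as they might arrive, already normalized, from several banks' APIs
offers = [
    FDOffer("Bank A", 7.1, 12, 10_000),
    FDOffer("Bank B", 7.6, 24, 25_000),
    FDOffer("Bank C", 6.9, 6, 5_000),
]
ranked = best_offers(offers, max_tenure_months=12, budget=15_000)
# Bank B drops out (24-month tenure); Bank A ranks above Bank C on rate
```

The interesting work in a real platform is the normalization step that makes offers from many banks comparable; once normalized, the comparison itself is simple filtering and sorting.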


What are the realistic prospects for grid-independent AI data centers in the UK?

Already, colo companies looking to develop in the UK are evaluating on-site gas engine power generation and CHP (combined heat and power). To date, UK CHP projects have been hampered by a lack of grid capacity. Microgrid developments are viewed as a solution to this. CHP and microgrids should also make data center developments more appealing for local government planning departments. ... Data center developments have hit front-line politics, with Rachel Reeves, the new UK Labour government’s Chancellor of the Exchequer (Finance Minister), citing data center infrastructure and reform of planning law as critical to growing the country’s economy. Some projects that were denied planning permission already look likely to be reconsidered, with reports that Deputy Prime Minister Angela Rayner has “recovered two planning appeals for data centers in Buckinghamshire and Hertfordshire.” It seems clear that meeting data center capacity demand for AI, cloud, and other digital services will require on-site power generation in some form or other.


Why Every IT Leader Needs a Team of Trusted Advisors

When seeking advisors, look for individuals with the time and willingness to join your kitchen cabinet, Kelley says. "Be mindful of their schedules and obligations, since they are doing you a favor," he notes. Additionally, if you're offering any perks, such as paid meals, travel reimbursement, or direct monetary payments, let them know upfront. Such bonuses are relatively rare, however. "More than likely, you’re talking about individual or small group phone calls or meetings." Above all, be honest and open with your team members. "Let them know what kind of help you need and the time frame you are working under," Kelley says. "If you've heard different or contradictory advice from other sources, bring it up and get their reaction," he recommends. Keep in mind that an advisory team is a two-way relationship. Kelley recommends personalizing each connection with an occasional handwritten note, book, lunch, or ticket to a concert or sporting event. On the other hand, if you decide to ignore their input or advice, you need to explain why, he suggests. Otherwise, they might conclude that being a team participant is a waste of time. Also be sure to help your team members whenever they need advice or support. 


Why CI and CD Need to Go Their Separate Ways

Continuous promotion is a concept designed to bridge the gap between CI and CD, addressing the limitations of traditional CI/CD pipelines when used with modern technologies like Kubernetes and GitOps. The idea is to insert an intermediary step that focuses on promotion of artifacts based on predefined rules and conditions. This approach allows more granular control over the deployment process, ensuring that artifacts are promoted only when they meet specific criteria, such as passing certain tests or receiving necessary approvals. By doing so, continuous promotion decouples the CI and CD processes, allowing each to focus on its core responsibilities without overextension. ... Introducing a systematic step between CI and CD ensures that only qualified artifacts progress through the pipeline, reducing the risk of faulty deployments. This approach allows the implementation of detailed rule sets, which can include criteria such as successful test completions, manual approvals or compliance checks. As a result, continuous promotion provides greater control over the deployment process, enabling teams to automate complex decision-making processes that would otherwise require manual intervention.
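A promotion gate of the kind described can be thought of as a small rule evaluator that sits between CI and CD. The rule names and artifact metadata fields below are hypothetical, a minimal sketch rather than any particular tool's API:

```python
# Each rule inspects an artifact's metadata and returns True if that
# criterion for promotion is satisfied.
RULES = {
    "tests_passed": lambda a: a.get("test_status") == "passed",
    "approved":     lambda a: "release-manager" in a.get("approvals", []),
    "scan_clean":   lambda a: a.get("vulnerabilities", 0) == 0,
}

def evaluate_promotion(artifact, rules=RULES):
    """Return (promote?, names of failed rules)."""
    failed = [name for name, check in rules.items() if not check(artifact)]
    return (not failed, failed)

artifact = {
    "image": "registry.example.com/app:1.4.2",
    "test_status": "passed",
    "approvals": ["release-manager"],
    "vulnerabilities": 0,
}
ok, failed = evaluate_promotion(artifact)
# ok is True and failed is empty, so this artifact may be promoted from
# the CI registry into the environment the CD system watches
```

Because the rules are data rather than pipeline steps, they can be versioned, audited, and extended (manual approvals, compliance checks) without touching either the CI or the CD side.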


CIOs listen up: either plan to manage fast-changing certificates, or fade away

Even when organizations finally decide to set policies and standardize security for new deployments, mitigating the existing deployments is a huge effort, and in the modern stack, there’s no dedicated operations team, he says. That makes it more important for CIOs to take ownership of the problem, Cairns points out. “Especially in larger, more complex and global organizations, the magnitude of trying to push these things through the organization is often underestimated,” he says. “Some of that is having a good handle on the culture and how to address these things in terms of messaging, communications, enforcement of the right policies and practices, and making sure you’ve got the proper stakeholder buy-in at the various points in this process — a lot of governance aspects.” ... Many large organizations will soon need to revoke and reprovision TLS certificates at scale. One in five Fortune 1000 companies use Entrust as their certificate authority, and from November 1, 2024, Chrome will follow Firefox in no longer trusting TLS certificates from Entrust because of a pattern of compliance failures, which the CA argues were, ironically, sometimes caused by enterprise customers asking for more time to deal with revocation. 


Effortless Concurrency: Leveraging the Actor Model in Financial Transaction Systems

In a financial transaction system, the data flow for handling inbound payments involves multiple steps and checks to ensure compliance, security, and accuracy. However, potential failure points exist throughout this process, particularly when external systems impose restrictions or when the system must dynamically decide on the course of action based on real-time data. ... Implementing distributed locks is inherently more complex, often requiring external systems like ZooKeeper, Consul, Hazelcast, or Redis to manage the lock state across multiple nodes. These systems need to be highly available and consistent to prevent the distributed lock mechanism from becoming a single point of failure or a bottleneck. ... In this messaging-based model, communication between different parts of the system occurs through messages. This approach enables asynchronous communication, decoupling components and enhancing flexibility and scalability. Messages are managed through queues and message brokers, which ensure orderly transmission and reception of messages. ... Ensuring message durability is crucial in financial transaction systems because it allows the system to replay a message if the processor fails to handle the command due to issues like external payment failures, storage failures, or network problems.
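The core of the actor model can be shown in miniature: one thread drains a mailbox, so the actor's state is only ever touched by that thread and no locks, local or distributed, are needed. This sketch uses Python's standard library; a production system would use an actor framework and a durable broker instead of an in-memory queue:

```python
import queue
import threading

class AccountActor:
    """Minimal actor: a private mailbox and a single worker thread that
    processes messages strictly in order, serializing access to balance."""

    def __init__(self, balance=0):
        self.balance = balance
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        self.mailbox.put(message)          # asynchronous: caller never blocks

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:                # poison pill: shut the actor down
                break
            kind, amount = msg
            if kind == "credit":
                self.balance += amount
            elif kind == "debit" and self.balance >= amount:
                self.balance -= amount     # overdrafts are simply rejected here

    def stop(self):
        self.mailbox.put(None)
        self._thread.join()

actor = AccountActor(balance=100)
actor.send(("credit", 50))
actor.send(("debit", 30))
actor.stop()
# messages were applied in order, leaving a balance of 120
```

An in-memory `queue.Queue` loses messages on a crash; the durability the article calls for comes from backing the mailbox with a broker that persists messages and lets a recovered processor replay them.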


Hundreds of LLM Servers Expose Corporate, Health & Other Online Data

Flowise is a low-code tool for building all kinds of LLM applications. It's backed by Y Combinator, and sports tens of thousands of stars on GitHub. Whether it be a customer support bot or a tool for generating and extracting data for downstream programming and other tasks, the programs that developers build with Flowise tend to access and manage large quantities of data. It's no wonder, then, that the majority of Flowise servers are password-protected. ... Leaky vector databases are even more dangerous than leaky LLM builders, as they can be tampered with in ways that do not alert the users of the AI tools that rely on them. For example, instead of just stealing information from an exposed vector database, a hacker can delete or corrupt its data to manipulate its results. One could also plant malware within a vector database such that when an LLM program queries it, it ends up ingesting the malware. ... To mitigate the risk of exposed AI tooling, Deutsch recommends that organizations restrict access to the AI services they rely on, monitor and log the activity associated with those services, protect sensitive data trafficked by LLM apps, and always apply software updates where possible.


Generative AI vs. Traditional AI

Traditional AI, often referred to as “symbolic AI” or “rule-based AI,” emerged in the mid-20th century. It relies on predefined rules and logical reasoning to solve specific problems. These systems operate within a rigid framework of human-defined guidelines and are adept at tasks like data classification, anomaly detection, and decision-making processes based on historical data. In sharp contrast, generative AI is a more recent development that leverages advanced ML techniques to create new content. This form of AI does not follow predefined rules but learns patterns from vast datasets to generate novel outputs such as text, images, music, and even code. ... Traditional AI relies heavily on rule-based systems and predefined models to perform specific tasks. These systems operate within narrowly defined parameters, focusing on pattern recognition, classification, and regression through supervised learning techniques. Data fed into these models is typically structured and labeled, allowing for precise predictions or decisions based on historical patterns. In contrast, generative AI uses neural networks and advanced ML models to produce human-like content. This approach leverages unsupervised or semi-supervised learning techniques to understand underlying data distributions.
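The rule-based side of the contrast is easy to show concretely. In the hypothetical fraud-check below, every threshold is chosen by a human expert rather than learned from data, which is the defining trait of traditional, symbolic AI:

```python
def classify_transaction(amount, country, usual_country):
    """Rule-based check: fixed, human-authored thresholds and conditions."""
    if amount > 10_000:                 # threshold set by an analyst, not learned
        return "flag: large amount"
    if country != usual_country:        # simple symbolic rule
        return "flag: unusual location"
    return "ok"

print(classify_transaction(50, "IN", "IN"))       # ok
print(classify_transaction(20_000, "IN", "IN"))   # flag: large amount
```

The rules are transparent and auditable, but they never improve on their own and handle nothing outside their framework. A learned model, generative or otherwise, would instead infer such decision boundaries from historical transaction data.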



Quote for the day:

"Opportunities don't happen. You create them." -- Chris Grosser