Daily Tech Digest - July 16, 2024

Learning cloud cost management the hard way

The rapid adoption of cloud technologies has outpaced the development of requisite skills within many organizations, leading to inefficiencies in provisioning, managing, and optimizing cloud resources. The No. 1 excuse that I hear from those overspending on cloud computing is that they can’t find the help they need to maximize cloud resources. They are kicking years of cloud-powered technical debt down the road, hoping that someone or some tool will come along to fix everything. ... Automation tools powered by AI can play a crucial role in ensuring that resources are only provisioned when needed and decommissioned when not in use, thus preventing idle resources from unnecessarily accruing costs. Moreover, a robust cost governance framework is essential for cloud cost management. This framework should include policies for resource provisioning, usage monitoring, and cost optimization. ... It’s frustrating that we’ve yet to learn how to do this correctly. 2020 wants its cloud spending problems back. This is not the only survey I’ve seen that reveals cost inefficiencies on a massive scale. I see this myself.
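
As a concrete illustration of the kind of automation described above, the sketch below (assuming AWS and the boto3 SDK; the 5% CPU threshold, seven-day window, and commented-out stop action are illustrative assumptions, not a recommended policy) flags running EC2 instances that have sat essentially idle and are candidates for decommissioning.

```python
# Hypothetical sketch: flag (and optionally stop) EC2 instances whose hourly average
# CPU utilization stayed below a threshold for the past seven days.
import datetime
import boto3

IDLE_CPU_PERCENT = 5.0
LOOKBACK = datetime.timedelta(days=7)

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

now = datetime.datetime.utcnow()
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - LOOKBACK,
            EndTime=now,
            Period=3600,            # one datapoint per hour
            Statistics=["Average"],
        )["Datapoints"]
        if datapoints and max(p["Average"] for p in datapoints) < IDLE_CPU_PERCENT:
            print(f"{instance_id} looks idle; candidate for decommissioning")
            # ec2.stop_instances(InstanceIds=[instance_id])  # uncomment to act
```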


The Interplay Of IoT And Critical Infrastructure Security

The vast number of interconnected devices in an IoT-driven infrastructure creates a massive attack surface. These devices often have limited processing power and may lack robust security features, which potentially makes them easy targets. ... The silver lining is that such heterogeneous cabling architectures can be glued together to deliver fiber-grade connectivity without the need to build new high-cost networks from scratch. An illustration of this tactic is Actelis’ hybrid-fiber technology, which harnesses high-performance managed Ethernet access switches and extenders to make the most of existing network infrastructures and provide gigabit speeds via virtually any wireline media. Actelis’ hybrid-fiber networking concept includes sections of fiber (for the easy-to-reach-with-fiber locations) and copper/coax that can be upgraded with Actelis’ technology to run fiber-grade communication. The company does both and provides management, security, and end-to-end integration for such entire networks, including the fiber parts. This is important, as networks that mix fiber and non-fiber media represent a significant part of the market.


MIT Researchers Introduce Generative AI for Databases

The researchers noticed that SQL didn’t provide an effective way to incorporate probabilistic AI models, but at the same time, approaches that use probabilistic models to make inferences didn’t support complex database queries. They built GenSQL to fill this gap, enabling someone to query both a dataset and a probabilistic model using a straightforward yet powerful formal programming language. A GenSQL user uploads their data and probabilistic model, which the system automatically integrates. Then, they can run queries on the data that also draw on the probabilistic model running behind the scenes. This not only enables more complex queries but can also provide more accurate answers. For instance, a query in GenSQL might be something like, “How likely is it that a developer from Seattle knows the programming language Rust?” ... Incorporating a probabilistic model can capture more complex interactions. Plus, the probabilistic models GenSQL utilizes are auditable, so people can see which data the model uses for decision-making. In addition, these models provide measures of calibrated uncertainty along with each answer.
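
To make the idea concrete, here is a toy Python sketch (not GenSQL syntax, and the Laplace-smoothed "model" is purely illustrative) of how a model-backed answer to a question like the Seattle/Rust example can differ from a plain frequency count over the raw rows.

```python
# Toy illustration: answer "how likely is it that a developer from Seattle knows Rust?"
# by combining the raw rows with a simple smoothed model, so sparse data does not
# produce a misleading 0% or 100% answer.
rows = [
    {"city": "Seattle", "knows_rust": True},
    {"city": "Seattle", "knows_rust": False},
    {"city": "Portland", "knows_rust": False},
    {"city": "Seattle", "knows_rust": True},
]

seattle = [r for r in rows if r["city"] == "Seattle"]
hits = sum(r["knows_rust"] for r in seattle)

# Plain SQL-style answer: empirical frequency only.
empirical = hits / len(seattle)

# Model-backed answer: Laplace-smoothed estimate, a stand-in for the probabilistic
# model that GenSQL would consult behind the scenes.
prior_successes, prior_trials = 1, 2
model_estimate = (hits + prior_successes) / (len(seattle) + prior_trials)

print(f"empirical={empirical:.2f}, model-backed={model_estimate:.2f}")
```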


Securing Applications in an Evolving API Security Landscape

Adding further fuel to the fire, threat actors constantly innovate, developing new techniques to target APIs. This includes exploiting zero-day vulnerabilities, leveraging stolen credentials through phishing attacks, or even using bots to automate brute-force attacks against API endpoints. Traditionally, API security focused on reactive measures, patching vulnerabilities and detecting attacks after they happened. However, GenAI allows attackers to automate tasks, churning out mass phishing campaigns or crafting malicious code specifically designed to exploit API weaknesses. These attacks, known for their speed and volume, easily overwhelm traditional security solutions designed for static environments. ... APIs are crucial for modern business operations, driving innovation and customer interactions. However, without proper security measures, APIs can be vulnerable to attacks that can put sensitive data at risk, disrupt operations, and damage customer trust. ... This underscores the real-world impact of inadequate API security, including delayed innovation, frustrated customers, and lost revenue. 


Fundamentals of Descriptive Analytics

Descriptive analytics helps to describe and present data in a format that can be easily understood by a wide variety of business readers. Descriptive analytics rarely attempts to investigate or establish cause-and-effect relationships. As this form of analytics doesn’t usually probe beyond surface analysis, the validity of its results is easier to establish. Some common methods employed in descriptive analytics are observations, case studies, and surveys. ... The main disadvantage of descriptive analytics is that it only reports what has happened in the past or what is happening now, without explaining the root causes behind the observed behaviors or predicting what is about to happen in the future. The analysis is generally limited to a few variables and their relationships. However, descriptive analytics becomes a powerful business resource when it is combined with other types of analytics for assessing business performance. While descriptive analytics focuses on reporting past or current events, the other types of analytics explore the root causes behind observed trends and can also predict future outcomes based on historical data analysis.
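
A minimal sketch of what this looks like in practice, assuming pandas and a made-up sales table: the summary describes what happened, without modeling causes or predicting outcomes.

```python
# Minimal descriptive-analytics sketch with pandas; the sales figures are invented.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "North", "South", "East"],
    "revenue": [1200, 950, 1430, 1010, 870],
})

print(sales["revenue"].describe())               # count, mean, std, min, quartiles, max
print(sales.groupby("region")["revenue"].sum())  # totals by region
```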


Strategies for creating seamless and personalised customer journeys

One cannot overlook the fact that CX in financial services is crucial since it influences customer satisfaction levels. In turn, this depends on providing seamless experiences across all touchpoints, beginning with the initial contact, right up to the final delivery and the consumption of financial products or services. Significantly, CX in financial services goes beyond merely offering excellent customer service. Instead, it involves ensuring frictionless, personalised experiences that surpass expectations across every stage of the customer’s journey. Be it banking, insurance or investment services, the ongoing digital transformation has substantially enhanced the expectations of customers. Today, speed, convenience and personalisation are taken for granted. As a result, financial entities that provide superior CX enjoy a clear competitive advantage in both attracting and retaining their customers, leading to increased profitability. ... Navigating complex regulatory environments ensures financial integrity. Regulations uphold standards and protect consumers. Compliant firms earn trust and enhance their reputation, which goes hand in hand with prioritising customer experience to stay competitive in the market.


Cultivating Continuous Learning: Transforming L&D for the Hybrid Future and Beyond

Adapting L&D for the Digital Age requires strategic initiatives that cater to evolving demands in today's dynamic landscape. This entails embracing technology-driven learning platforms and tools that facilitate remote access, personalised learning paths, and real-time analytics. By leveraging these innovations, organisations can ensure that learning experiences are agile, responsive, and tailored to individual needs. Integrating digital collaboration tools fosters a culture of continuous learning and knowledge sharing across geographies, enhancing organisational agility and competitiveness in a digital-first world. Future-proofing L&D involves identifying and developing critical skills that will drive success in tomorrow's workplace. This proactive approach requires foresight into emerging trends and industry shifts, preparing employees with skills such as adaptability, digital literacy, creativity, and emotional intelligence. Implementing forward-thinking training programs and certifications ensures that employees remain adept and resilient in the face of technological advancements and market disruptions.


Cybersecurity Can Be a Business Enabler

Cybersecurity controls and protective mechanisms can protect an organization's assets - its data, people, technology equipment, etc. By actively protecting assets and preventing data breaches, an organization can avoid potential negative business impact, financial or otherwise. And because the organization does not have to worry about that potential damage, it can operate in a safe and focused fashion. ... Suffering a negative security event indicates some gap or deficiency in an organization's security posture. All organizations have security gaps, but some never report a negative security event - probably because they have invested more resources in differentiating themselves from their competitors. Implementing strong protective measures shows customers that an organization takes security seriously, which makes it a more appealing business partner. ... Customers and partners are becoming more aware of cyber risks, and they prioritize cybersecurity when they consider engaging in business. By implementing effective cybersecurity measures, a company can improve the confidence that potential customers and partners have in it. Over time, this will lead to increased loyalty and trust.


What is transformational leadership? A model for motivating innovation

The most important thing you can do as a transformational leader is to lead by example. Employees will look to you as a role model for behavior in all areas of the workplace. If you lead with authenticity, employees will pick up on that behavior and feel inspired to maintain that high standard for performance. It’s not about manipulating employees into working hard; it’s about leading by example and positively influencing others through a commitment to trust, transparency, and respect. ... To help create change, it’s important to challenge long-standing beliefs in the company and push the status quo by encouraging innovation, creativity, critical thinking, and problem-solving. Transformative leaders should help employees feel comfortable exploring new ideas and opportunities that can inject innovation into the organization. ... As a transformational leader, you will need to encourage your team to feel attached and committed to the vision of the organization. You want to ensure employees feel as committed to these goals as you do as a leader by giving employees a strong sense of purpose, rather than attempting to motivate them through fear.


How Post-Quantum Cryptography Can Ensure Resilience

Quantum computing represents a major threat to data security, as it can make attacks against cryptography much more efficient. There are two ways bad actors could use this technology. One is the “Store now, decrypt later” method, in which cybercriminals steal sensitive data and wait until quantum computers have the ability to break its encryption. This is particularly important for you to know if your organization retains data with a long confidentiality span. The other method is to break the data’s digital signatures. A bad actor could “compute” credentials based on publicly available information, then impersonate someone with the authority to sign documents or approve requests. As with the above method, criminals can do this retroactively if older signatures are not updated. Today’s encryption methods cannot stand against the capabilities of tomorrow’s quantum computers. When large-scale quantum computers are built, they will have the computing ability to decrypt many of the current public key cryptography systems.
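
One common way to reason about the "store now, decrypt later" risk is Mosca's inequality: if the data's required confidentiality span plus the time needed to migrate to post-quantum cryptography exceeds the time until a cryptographically relevant quantum computer arrives, data encrypted today is already at risk. The sketch below shows the arithmetic; the numbers are assumptions, not predictions.

```python
# Sketch of Mosca's inequality for "store now, decrypt later" risk.
# If shelf_life + migration_time > time_to_quantum, data encrypted today is at risk.
shelf_life_years = 15   # how long the data must remain confidential
migration_years = 5     # how long moving to post-quantum cryptography will take
years_to_quantum = 12   # assumed arrival of a cryptographically relevant quantum computer

at_risk = shelf_life_years + migration_years > years_to_quantum
print("Data at risk from store-now-decrypt-later:", at_risk)
```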



Quote for the day:

"The only limit to our realization of tomorrow will be our doubts of today." -- Frank D Roosevelt

Daily Tech Digest - July 15, 2024

For databases that need to hold and query hundreds of petabytes to exabytes of data, there are only a few viable choices. Be very careful about evaluating and planning exabyte-scale databases, as they are huge commitments and hard to change once the data is loaded. ... Data consistency is usually “strong” for SQL databases, meaning that all reads return the latest data. Data consistency may be anything from “eventual” to “strong” for NoSQL databases. Eventual consistency offers lower latency, at the risk of reading stale data. Consistency is the “C” in the ACID properties required for validity in the event of errors, network partitions, and power failures. The four ACID properties are Atomicity, Consistency, Isolation, and Durability. ... Some databases are faster at reads and queries, and others are faster at writes. The mix of reads and writes you expect from your application is a useful number to include in your database selection criteria, and can guide your benchmarking efforts. The optimum choice of index type differs between read-heavy applications and write-heavy applications.
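
Since the expected read/write mix is one of the selection criteria mentioned above, a rough sketch like the following (assuming a simple query log; the parsing is deliberately naive) can turn that expectation into a number to feed into benchmarking.

```python
# Rough sketch: estimate an application's read/write mix from a query log.
# The log format (one statement per line) is an assumption for illustration.
from collections import Counter

READS = ("SELECT",)
WRITES = ("INSERT", "UPDATE", "DELETE", "UPSERT")

def read_write_mix(log_lines):
    counts = Counter()
    for line in log_lines:
        stripped = line.strip()
        verb = stripped.split(None, 1)[0].upper() if stripped else ""
        if verb in READS:
            counts["reads"] += 1
        elif verb in WRITES:
            counts["writes"] += 1
    total = sum(counts.values()) or 1
    return {k: v / total for k, v in counts.items()}

sample_log = ["SELECT * FROM orders", "INSERT INTO orders VALUES (1)", "SELECT 1"]
print(read_write_mix(sample_log))   # e.g. roughly {'reads': 0.67, 'writes': 0.33}
```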


Can we please stop talking about replacing employees with AI?

With mundane, repetitive tasks taken over by AI, your team will have more room for creative, human work. Shift your organizational structure and job descriptions to open up room for the strategic, relationship-oriented work that AI just can’t do as well as a human. With less on employees’ to-do lists, leaders and managers can shift their focus to challenging and developing them in other more valuable areas, such as building relationships and soft skills. AI can help here, too, but not in the way you may think. Rather than replacing humans, it can unlock insights (from data most companies already have) that lead to hiring and nurturing the right humans for the job. Software used to identify the important soft skills that lead to success in specific roles can be extremely valuable for both employers and employees. Ultimately, improvements in job performance and satisfaction will boost morale and your bottom line — a win-win. ... A good leader knows that for any organization to survive, people cannot be replaced. AI should instead help make employees exponentially more valuable to their companies by freeing them to spend more time on creative productivity, and help companies become more valuable to their employees by improving their lives.


The Complexity of Solving Performance Problems

Our brains work in two different modes, particularly as we’re solving problems: intuitive and analytical. Intuitive is the mode that our brain is in when we’re not actively and effortfully thinking about something. It’s automatic. Analytical is the mode where we’re actually applying effort and working diligently in a focused manner in our brains. ... Nobody ever says, “Come fix my system,” without telling you what the problem is, right? They might just tell you that it’s slow, but there’s almost always something along with that. Maybe “It’s slow and I think there’s an IO problem” or “It’s slow and it looks like we’re pegging the CPU between two and three o’clock in the morning.” There’s always a little bit of information. That information gives us the ability to start theorizing early. That’s the main characteristic of this approach. ... With the methodical approach, we’re basically going dogmatically through the steps outlined above. And the key characteristic of this one is that we have the mental discipline to postpone that theorizing step until after we’ve gathered the data.


Liquid cooling solutions for high-density computing applications

One of the major challenges when designing data centers is anticipating end-of-life options and accurately predicting the nominal heat load from the beginning. This unpredictability often leads to a disconnect between projected and actual heat demands. For instance, a system designed for 1MW might only require 150kW in reality. Liquid cooling intensifies this challenge due to fluctuating cooling demands; as servers process large amounts of data, energy demand and heat rejection peak, then drop drastically when the servers are idle. ... Liquid cooling offers significant sustainability gains and surpasses traditional air cooling methods in efficiency. By capturing heat directly at its source, liquid cooling allows operation at higher temperatures within the cooling loop. This facilitates the use of free cooling techniques, leveraging ambient air for cooling without resorting to additional energy-intensive processes. Moreover, the captured heat can be reused, further enhancing energy efficiency and sustainability.


What to do if certificates for passive authentication fail

If you have just one source and it becomes unavailable, your entire authentication process can be compromised. A wise solution would be to use a combination of sources to ensure higher service availability. At the government-to-government level, countries share certificates for passive authentication with each other through diplomatic channels, thus collecting their own database of trusted certificates. ... No matter which sources you trust to get certificates and master lists from—be it a central repository or each publisher directly—the responsibility remains yours. Organizations like the ICAO or BSI may facilitate the process, but they are not liable if something goes wrong. Also, understanding the certificate system and its expiration dates is crucial. You need to set up alerts for approaching expiration dates to ensure you have a fresh certificate ready to replace the old one. The most responsible organizations don’t rely solely on external sources. They collect certificates from various countries, create their own master list, sign it with their own certificate, and ensure that it’s valid.
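
As a minimal sketch of the expiration alerting described above, a periodic job could do something like the following (assuming a PEM-encoded certificate on disk, the Python 'cryptography' package, and an arbitrary 30-day warning window; the file name is hypothetical).

```python
# Sketch: warn when a certificate used for passive authentication is close to expiry.
import datetime
from cryptography import x509

WARN_BEFORE = datetime.timedelta(days=30)

def check_expiry(pem_path):
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    remaining = cert.not_valid_after - datetime.datetime.utcnow()
    if remaining < WARN_BEFORE:
        print(f"{pem_path}: expires in {remaining.days} days, rotate now")
    else:
        print(f"{pem_path}: {remaining.days} days of validity left")

check_expiry("csca_certificate.pem")  # hypothetical file name
```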


Ignoring quantum threats in CBDC design is reckless

Relying on quantum mechanical phenomena, quantum computing far outpaces classical computers and supercomputers, in both speed and capability, at solving complex problems. Public-key cryptography is one of its primary targets. Thus, it’s alarming that developers of CBDCs are not prioritizing quantum-resistant designs in their early technical roadmaps. As RAND mathematician Alvin Moon noted, “A steady stream of advances in quantum computing technology is undeniably increasing the urgency to standardize and implement post-quantum algorithms, or algorithms which are resistant to both present-day computing and on-the-horizon quantum computing.” This security flaw represents a serious vulnerability in the US monetary system if central bank digital currencies are eventually adopted by the Federal Reserve and authorized by Congress. Valid concerns about surveillance and potential harm to civil liberties are driving debates and may positively influence laws, regulations and ethical governance standards. Technical designs continue to evolve, albeit slowly, to address privacy risks. However, without secure and resilient technical rails, CBDC adoption will remain a serious risk.


A CISO's Summary Of The Cyber Resilience Act

As a baseline, you need to make sure that whatever you’re building is as secure as is reasonable to be. Your product must have minimal attack surfaces and should be hardened. This means that by default it should not allow unauthorized access; its data is encrypted or protected; and its data and commands can’t be intercepted or manipulated. It also means that your product must keep working, even under a DoS attack, and that it mustn’t interrupt other devices, even when attacked with exploits. Additionally, your product needs to be able to monitor or log changes in the device, as well as receive security updates or rollbacks. This includes direct or remote updates, user notifications about updates and the ability to roll back updates or reset the product to a factory/default state. On top of that, you’ll also need to follow or provide documentation and information about your product, including a description of the design, development and vulnerability handling process. This would also include a formal SBOM, where you’d include more information on the components in your product and its potential vulnerabilities.


The Compelling Need for Low-Code in Legacy Application Modernization

With technology at the centre of India’s transformation initiatives, it is imperative that its base digital infrastructure undergoes transformation if the nation hopes to achieve its goals. However, in this quest to become a digital leader lies a huge challenge: legacy systems. Even today, in many organisations, legacy systems are still used to run daily processes and are a huge hurdle in the path of digital transformation. Legacy systems not only consume a significant portion of IT budgets but also make businesses incapable of adapting to fast-changing customer needs. ... There is another compelling method that has gained huge traction in recent times. Low-code visual software development, which helps developers accelerate the software development process using a friendly and intuitive drag-and-drop process. The low-code development approach offers organisations the ability to modernise their core systems without the cost and risk of traditional development approaches. Let us look at some of the common legacy systems that are prevalent in most organisations today and understand how low-code platforms can be used effectively. 


How To Establish A Culture Of Cyber-Resilience Within An Organization

Workers ought to be aware of how their activities affect the security posture of the company and should feel confident enough to serve as the first line of defense. It is also essential to routinely update training programs to reflect the most recent threat landscape. Organizations may greatly lower their susceptibility to cyberattacks by encouraging a culture of ongoing education and awareness. ... Incorporate cybersecurity into all company operations to ensure consideration of security issues during each undertaking and decision-making process. This interconnectedness allows each department to understand its responsibility in safeguarding the company’s digital assets, leading to a holistic approach to security. One essential first step is to create security rules and processes that support company goals. Making sure cybersecurity needs are part of the planning and development phases of a project facilitates early risk identification and the application of suitable safeguards. ... More than simply putting security technology into place, an organization must develop a cyber-resilience culture.


Mastering Long-Running Processes in Modern Architectures

It is important to note that long-running processes refer to the act of waiting, not algorithms running for extended periods. This waiting can be due to human actions, external responses, or intentionally allowing time to pass. These processes can take hours, days, weeks, or even longer. ... Besides business reasons for waiting, there are also technical ones, such as asynchronous communication delays, failures in message delivery, and the unavailability of peer services in distributed systems. If not addressed, these issues can lead to cascading failures. ... As soon as you start handling the complexities of distributed systems and long-running processes in payment processing, it is essential to embrace asynchronous communication in your API design. Although payments are usually quick and straightforward, situations like declined credit cards or unavailable services demand a different approach. We can build more adaptable and robust payment systems by designing systems that can handle immediate and delayed responses and using signals like a 202 HTTP code to indicate that processing will continue in the background.
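
A minimal sketch of that asynchronous pattern, assuming Flask and illustrative endpoint names: the payment request is accepted immediately with a 202 status code and a status URL, and the caller polls while processing continues in the background.

```python
# Minimal Flask sketch of the asynchronous pattern described above.
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
payments = {}  # in-memory stand-in for a durable workflow/state store

@app.route("/payments", methods=["POST"])
def create_payment():
    payment_id = str(uuid.uuid4())
    payments[payment_id] = {"status": "processing", "request": request.get_json()}
    # A real system would hand off to a workflow engine or message queue here.
    return jsonify({"id": payment_id, "status_url": f"/payments/{payment_id}"}), 202

@app.route("/payments/<payment_id>", methods=["GET"])
def payment_status(payment_id):
    payment = payments.get(payment_id)
    if payment is None:
        return jsonify({"error": "unknown payment"}), 404
    return jsonify({"id": payment_id, "status": payment["status"]}), 200
```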



Quote for the day:

"Your problem isn't the problem. Your reaction is the problem." -- Anonymous

Daily Tech Digest - July 13, 2024

Work in the Wake of AI: Adapting to Algorithmic Management and Generative Technologies

Current legal frameworks are struggling to keep pace with the issues arising from algorithmic management. Traditional employment laws, such as those concerning unfair dismissal, often do not extend protections to “workers” as a distinct category. Furthermore, discrimination laws require proof that the discriminatory behaviour was due to or related to the protected characteristic, which is difficult to ascertain and prove with algorithmic systems. To mitigate these issues, the researchers recommend a series of measures. These include ensuring algorithmic systems respect workers’ rights, granting workers the right to opt out of automated decisions such as job termination, banning excessive data monitoring and establishing the right to a human explanation for decisions made by algorithms. ... Despite the rapid deployment of GenAI and the introduction of policies around its use, concerns about misuse are still prevalent among nearly 40% of tech leaders. While recognising AI’s potential, 55% of tech leaders have yet to identify clear business applications for GenAI beyond personal productivity enhancements, and budget constraints remain a hurdle for some.


The rise of sustainable data centers: Innovations driving change

Data centers contribute significantly to global carbon emissions, making it essential to adopt measures that reduce their carbon footprint. Carbon usage effectiveness (CUE) is a metric used to assess a data center's carbon emissions relative to its energy consumption. By minimizing CUE, data centers can significantly lower their environmental impact. ... Cooling is one of the largest energy expenses for data centers. Traditional air cooling systems are often inefficient, prompting the need for more advanced solutions. Free cooling, which leverages outside air, is a cost-effective method for data centers in cooler climates. Liquid cooling, on the other hand, uses water or other coolants to transfer heat away from servers more efficiently than air. ... Building and retrofitting data centers sustainably involves adhering to green building certifications like Leadership in Energy and Environmental Design (LEED) and Building Research Establishment Environmental Assessment Method (BREEAM). These certifications ensure that buildings meet high environmental performance standards.
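
For reference, CUE as defined by The Green Grid is simply the total CO2-equivalent emissions caused by the data center's energy consumption divided by the energy consumed by the IT equipment alone; the sketch below shows the arithmetic with made-up annual figures.

```python
# Sketch of the CUE calculation; lower is better. The figures are illustrative.
total_co2_kg = 4_200_000    # kgCO2eq emitted for all facility energy in a year
it_energy_kwh = 6_000_000   # kWh consumed by IT equipment in the same year

cue = total_co2_kg / it_energy_kwh
print(f"CUE = {cue:.2f} kgCO2eq per kWh of IT energy")
```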


How AIOps Is Poised To Reshape IT Operations

A meaningfully different, as yet underutilized, high-value data set can be derived from the rich, complex interactions of information sources and users on the network, promising to triangulate and correlate with the other data sets available, elevating their combined value to the use case at hand. The challenge in leveraging this source is that the raw traffic data is impossibly massive and too complex for direct ingestion. Further, even compressed into metadata, without transformation, it becomes a disparate stream of rigid, high-cardinality data sets due to its inherent diversity and complexity. A new breed of AIOps solutions is poised to overcome this data deficiency and transform this still raw data stream into refined collections of organized data streams that are augmented and edited through intelligent feature extraction. These solutions use an adaptive AI model and a multi-step transformation sequence to work as an active member of a larger AIOps ecosystem by harmonizing data feeds with the workflows running on the target platform, making it more relevant and less noisy.


Addressing Financial Organizations’ Digital Demands While Avoiding Cyberthreats

The financial industry faces a difficult balancing act, with multiple conflicting priorities at the forefront. Organizations must continually strengthen security around their evolving solutions to keep up in an increasingly competitive and fast-moving landscape. But while strong security is a requirement, it cannot impact usability for customers or employees in an industry where accessibility, agility and the overall user experience are key differentiators. One of the best options to balancing these priorities is the utilization of secure access service edge (SASE) solutions. This model integrates several different security features such as secure web gateway (SWG), zero-trust network access (ZTNA), next-generation firewall (NGFW), cloud access security broker (CASB), data loss prevention (DLP) and network management functions, such as SD-WAN, into a single offering delivered via the cloud. Cloud-based delivery enables financial organizations to easily roll out SASE services and consistent policies to their entire network infrastructure, including thousands of remote workers scattered across various locations, or multiple branch offices to protect private data and users, as well as deployed IoT devices.


Three Signs You Might Need a Data Fabric

One of the most significant challenges organizations face is data silos and fragmentation. As businesses grow and adopt new technologies, they often accumulate disparate data sources across different departments and platforms. These silos make it tougher to have a holistic view of your organization's data, resulting in inefficiencies and missed opportunities. ... You understand that real-time analytics is crucial to your organization’s success. You need to respond quickly to changing market conditions, customer behavior, and operational events. Traditional data integration methods, which often rely on batch processing, can be too slow to meet these demands. You need real-time analytics to: manage the customer experience (enhancing a customer’s experience through personalized and timely interactions makes real-time analytics essential); operate efficiently (real-time monitoring and analytics can help optimize operations, reduce downtime, and improve overall efficiency); and handle competitive pressure (staying ahead of competitors requires quick adaptation to market trends and consumer demands, which real-time insights facilitate).


The Tension Between The CDO & The CISO: The Balancing Act Of Data Exploitation Versus Protection

While data delivers a significant competitive advantage to companies when used appropriately, without the right data security measures in place it can be misused. This not only erodes customers’ trust but also puts the company at risk of having to pay penalties and fines for non-compliance with data security regulations. As data teams aim to extract and exploit data for the benefit of the organisation, it is important to note that not all data is equal. As such a risk-based approach must be in place to limit access to sensitive data across the organisation. In doing this the IT system will have access to the full spectrum of data to join and process the information, run through models and identify patterns, but employees rarely need access to all this detail. ... To overcome the conflict of data exploitation versus security and deliver a customer experience that meets customer expectations, data teams and security teams need to work together to achieve a common purpose and align on the culture. To achieve this each team needs to listen to and understand their respective needs and then identify solutions that work towards helping to make the other team successful.


Content Warfare: Combating Generative AI Influence Operations

Moderating such enormous amounts of content by human beings is impossible. That is why tech companies now employ artificial intelligence (AI) to moderate content. However, AI content moderation is not perfect, so tech companies add a layer of human moderation for quality checks to the AI content moderation processes. These human moderators, contracted by tech companies, review user-generated content after it is published on a website or social media platform to ensure it complies with the “community guidelines” of the platform. However, generative AI has forced companies to change their approach to content moderation. ... Countering such content warfare requires collaboration across generative AI companies, social media platforms, academia, trust and safety vendors, and governments. AI developers should build models with detectable and fact-sensitive outputs. Academics should research the mechanisms of foreign and domestic influence operations emanating from the use of generative AI. Governments should impose restrictions on data collection for generative AI, impose controls on AI hardware, and provide whistleblower protection to staff working in the generative AI companies. 


OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

OpenAI isn't alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI's system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don't yet exist. OpenAI's classification system also somewhat resembles Anthropic's "AI Safety Levels" (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic's ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to "systems that show early signs of dangerous capabilities"), while OpenAI's levels track general capabilities. However, any AI classification system raises questions about whether it's possible to meaningfully quantify AI progress and what constitutes an advancement. The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI's potentially risk fueling unrealistic expectations.


White House Calls for Defending Critical Infrastructure

The memo encourages federal agencies "to consult with regulated entities to establish baseline cybersecurity requirements that can be applied across critical infrastructures" while maintaining agility and adaptability to mature with the evolving cyberthreat landscape. ONCD and OMB also urged agencies and federal departments to study open-source software initiatives and the benefits that can be gained by establishing a governance function for open-source projects modeled after the private sector. Budget submissions should identify existing departments and roles designed to investigate, disrupt and dismantle cybercrimes, according to the memo, including interagency task forces focused on combating ransomware infrastructure and the abuse of virtual currency. Meanwhile, the administration is continuing its push for agencies to only use software provided by developers who can attest their compliance with minimum secure software development practices. The national cyber strategy - as well as the joint memo - directs agencies to "utilize grant, loan and other federal government funding mechanisms to ensure minimum security and resilience requirements" are incorporated into critical infrastructure projects.


Unifying Analytics in an Era of Advanced Tech and Fragmented Data Estates

“Data analytics has a last-mile problem,” according to Alex Gnibus, technical product marketing manager, architecture at Alteryx. “In shipping and transportation, you often think of the last-mile problem as that final stage of getting the passenger or the delivery to its final destination. And it’s often the most expensive and time-consuming part.” For data, there is a similar problem; when putting together a data stack, enabling the business at large to derive value from the data is a key enabler—and challenge—of a modern enterprise. Achieving business value from data is the last mile, which is made difficult by complex, numerous technologies that are inaccessible to the final business user. Gnibus explained that Alteryx solves this problem by acting as the “truck” that delivers tangible business value from proprietary data, offering data discovery, use case identification, preparation and analysis, insight-sharing, and AI-powered capabilities. Acting as the easy-to-use interface for a business’ data infrastructure, Alteryx is the AI platform for large-scale enterprise analytics that offers no-code, drag-and-drop functionality that works with your unique data framework configuration as it evolves.



Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel

Daily Tech Digest - July 12, 2024

4 considerations to help organizations implement an AI code of conduct

Many organizations consider reinventing the wheel to accommodate AI tools, but this creates a significant amount of unnecessary work. Instead, they should subject any AI tool to the same rigorous procurement process that applies to any product that concerns data security. The procurement process must also take into consideration the organization’s privacy and ethical standards, to ensure these are never compromised in the name of new technology. ... It’s important to be conscious of the privacy policies of AI tools when using these in an enterprise environment — and be sure to only use these with a commercial license. To address this risk, an AI code of conduct should stipulate that free tools are categorically banned for use in any business context. Instead, employees should be required to use an approved, officially procured commercial license solution, with full privacy protections. ... Every organization needs to remain aware of how their technology vendors use AI in the products and services that they buy from them. To enable this, an AI code of conduct should also enforce policies to enable organizations to keep track of their vendor agreements.


From Microservices to Modular Monoliths

You know who really loves microservices? Cloud hosting companies like Microsoft, Amazon, and Google. They make a lot of money hosting microservices. They also make a lot of money selling you tools to manage your microservices. They make even more money when you have to scale up your microservices to handle the increased load on your system. ... So what do you do when you find yourself in microservice hell? How do you keep the gains you (hopefully) made in breaking up your legacy ball of mud, without having to constantly contend with a massively distributed system? It may be time to (re)consider the modular monolith. A modular monolith is a monolithic application that is broken up into modules. Each module is responsible for a specific part of the application. Modules can communicate with each other through well-defined interfaces. This allows you to keep the benefits of a monolithic architecture, while still being able to break up your application into smaller, more manageable pieces. Yes, you'll still need to deal with some complexity inherent to modularity, such as ensuring modules remain independent while still being able to communicate with one another efficiently. 
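
A minimal sketch of the idea, with illustrative module and function names: both modules ship in one deployable, but the orders module depends on billing only through a narrow, well-defined interface.

```python
# Sketch of a modular monolith in Python: one deployable, separate modules,
# communication only through explicit interfaces. Names are illustrative.
#
# billing/api.py -- the only thing other modules may import from billing
class BillingAPI:
    def charge(self, customer_id: str, amount_cents: int) -> str:
        """Charge a customer and return an invoice id (internals stay private)."""
        return self._create_invoice(customer_id, amount_cents)

    def _create_invoice(self, customer_id: str, amount_cents: int) -> str:
        return f"inv-{customer_id}-{amount_cents}"

# orders/service.py -- depends on billing only through its public interface
class OrderService:
    def __init__(self, billing: BillingAPI):
        self.billing = billing

    def place_order(self, customer_id: str, amount_cents: int) -> dict:
        invoice_id = self.billing.charge(customer_id, amount_cents)
        return {"customer": customer_id, "invoice": invoice_id, "status": "placed"}

print(OrderService(BillingAPI()).place_order("c42", 1999))
```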


Deep Dive: Optimizing AI Data Storage Management

In an AI data pipeline, various stages align with specific storage needs to ensure efficient data processing and utilization. Here are the typical stages along with their associated storage requirements: Data collection and pre-processing: The storage where the raw and often unstructured data is gathered and centralized (increasingly into Data Lakes) and then cleaned and transformed into curated data sets ready for training processes. Model training and processing: The storage that feeds the curated data set into GPUs for processing. This stage of the pipeline also needs to store training artifacts such as the hyperparameters, run metrics, validation data, model parameters and the final production inferencing model. Inferencing and model deployment: The mission-critical storage where the trained model is hosted for making predictions or decisions based on new data. The outputs of inferencing are utilized by applications to deliver the results, often embedded into information and automation processes. Storage for archiving: Once the training stage is complete, various artifacts such as different sets of training data and different versions of the model need to be stored alongside the raw data.


RAG (Retrieval Augmented Generation) Architecture for Data Quality Assessment

RAG is basically designed to leverage LLMs on your own content or data. It involves retrieving relevant content to augment the context or insights as part of the generation process. However, RAG is an evolving technology with both strengths and limitations. RAG integrates information retrieval from a dedicated, custom, and accurate knowledge base, reducing the risk of LLMs offering general or non-relevant responses. For example, when the knowledge base is tailored to a specific domain (e.g., legal documents for a law firm), RAG equips the LLM with relevant information and terminology, improving the context and accuracy of its responses. At the same time, there are limitations associated with RAG. RAG heavily relies on the quality, accuracy, and comprehensiveness of the information stored within the knowledge base. Incomplete, inaccurate or missing information or data can lead to misleading or irrelevant retrieved data. Overall, the success of RAG hinges on quality data. So, how are RAG models implemented? RAG has basically two key components: a retriever model and a generator model.
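
A toy sketch of those two components, where the word-overlap retriever and the stubbed-out generator are deliberate simplifications of real embedding-based retrieval and of an actual LLM call, looks roughly like this.

```python
# Toy RAG sketch: a retriever that picks the most relevant passages from a small
# knowledge base, and a "generator" step that would pass the augmented prompt to
# an LLM (stubbed out here). The knowledge base contents are invented.
KNOWLEDGE_BASE = [
    "Contracts must be reviewed by the legal team before signature.",
    "Invoices are paid within 30 days of receipt.",
    "Employees accrue 25 days of annual leave per year.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return prompt  # a real system would send this prompt to the LLM

print(generate("How quickly are invoices paid?"))
```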
 

NoSQL Database Growth Has Slowed, but AI Is Driving Demand

As for MongoDB, it too is targeting generative AI use cases. In a recent post on The New Stack, developer relations team lead Rick Houlihan explicitly compared its solution to PostgreSQL, a popular open source relational database system. Houlihan contended that systems like PostgreSQL were not designed for the type of workloads demanded by AI: “Considering the well-known performance limitations of RDBMS when it comes to wide rows and large data attributes, it is no surprise that these tests indicate that a platform like PostgreSQL will struggle with the kind of rich, complex document data required by generative AI workloads.” Unsurprisingly, he concludes that using a document database (like MongoDB) “delivers better performance than using a tool that simply wasn’t designed for these workloads.” In defense of PostgreSQL, there is no shortage of managed service providers for Postgres that provide AI-focused functionality. Earlier this year I interviewed a “Postgres as a Platform” company called Tembo, which has seen a lot of demand for AI extensions. “Postgres has an extension called pgvector,” Tembo CTO Samay Sharma told me.
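
For illustration, a minimal pgvector query from Python might look like the sketch below; the connection string, table, and three-dimensional embeddings are assumptions, and `<->` is pgvector's L2 distance operator.

```python
# Hypothetical sketch of a nearest-neighbour query with pgvector from Python.
# Assumes a Postgres instance with the vector extension available and psycopg2.
import psycopg2

conn = psycopg2.connect("dbname=example user=example")  # placeholder DSN
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("CREATE TABLE IF NOT EXISTS items (id serial PRIMARY KEY, embedding vector(3))")
cur.execute("INSERT INTO items (embedding) VALUES ('[0.1, 0.2, 0.3]'), ('[0.9, 0.8, 0.7]')")

# The query vector is passed as text and cast to the vector type.
cur.execute(
    "SELECT id FROM items ORDER BY embedding <-> %s::vector LIMIT 1",
    ("[0.1, 0.2, 0.25]",),
)
print(cur.fetchone())
conn.commit()
```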


Let’s Finally Build Continuous Database Reliability! We Deserve It

While we worked hard to make sure our CI/CD pipelines are fast and learned how to deploy and test applications reliably, we didn’t make the same advances in the database world. It’s time to get continuous reliability around databases as well. To do that, developers need to own their databases. Once developers take over the ownership, they will be ready to optimize the pipelines, thereby achieving continuous reliability for databases. This shift of ownership needs to be consciously driven by technical leaders. ... The primary advantage of implementing database guardrails and empowering developers to take ownership of their databases is scalability. This approach eliminates team constraints, unlocking their complete potential and enabling them to operate at their optimal speed. By removing the need to collaborate with other teams that lack comprehensive context, developers can work more swiftly, reducing communication overhead. Just as we recognized that streamlining communication between developers and system engineers was the initial step, leading to the evolution into DevOps engineers, the objective here is to eliminate dependence on other teams.


Digital Transformation: Making Information Work for You

With information generated by digital transactions, the first goal is to ensure that the knowledge garnered does not get stuck between only those directly participating in the transaction. Lessons learned from the transaction should become part of the greater organizational memory. This does not mean that every single transaction needs to be reported to every person in the organization. It also doesn’t mean that the information needs to be elevated in the same form or at the same velocity to all recipients. Those participating in the transaction need an operational view of the transaction. This needs to happen in real time. The information is the enabler of the human-to-computer-to-human transaction and the speed of that information flow needs to be as quick as it was in the human-to-human transaction. Otherwise, it will be viewed as a roadblock instead of an enabler. As it escalates to the next level of management, the information needs to evolve to a managerial view. Managers are more interested in anomalies and outliers or data at a summary level. This level of information is no less impactful to the organizational memory but is associated with a different level of decision-making. 


Generative AI won’t fix cloud migration

The allure of generative AI lies in its promise of automation and efficiency. If cloud migration was a one-size-fits-all scenario, that would work. But each enterprise faces unique challenges based on its technological stack, business requirements, and regulatory environment. Expecting a generative AI model to handle all migration tasks seamlessly is unrealistic. ... Beyond the initial investment in AI tools, the hidden costs of generative AI for cloud migration add up quickly. For instance, running generative AI models often requires substantial computational resources, which can be expensive. Also, keeping generative AI models updated and secure demands robust API management and cybersecurity measures. Finally, AI models need continual refinement and retraining to stay relevant, incurring ongoing costs. ... Successful business strategy is about what works well and what needs to be improved. We all understand that AI is a powerful tool and has been for decades, but it needs to be considered carefully—once you’ve identified the specific problem you’re looking to solve. Cloud migration is a complex, multifaceted process that demands solutions tailored to unique enterprise needs. 


Navigating Regulatory and Technological Shifts in IIoT Security

Global regulations play a pivotal role in shaping the cybersecurity landscape for IIoT. The European Union’s Cyber Resilience Act (CRA) is a prime example, setting stringent requirements for manufacturers supplying products to Europe. By January 2027, companies must meet comprehensive standards addressing security features, vulnerability management, and supply chain security. ... The journey towards securing IIoT environments is multifaceted, requiring manufacturers to navigate regulatory requirements, technological advancements, and proactive risk management strategies. Global regulations like the EU’s Cyber Resilience Act set critical standards that drive industry-wide improvements. At the same time, technological solutions such as PKI and SBOMs play essential roles in maintaining the integrity and security of connected devices. By adopting a collaborative approach and leveraging robust security frameworks, manufacturers can create resilient IIoT ecosystems that withstand evolving cyber threats. The collective effort of all stakeholders is paramount to ensuring the secure and reliable operation of industrial environments in this new era of connectivity.


Green Software Foundation: On a mission to decarbonize software

One of the first orders of business in increasing awareness: getting developers and companies to understand what green software really is. Instead of reinventing the wheel, the foundation reviewed a course in the concepts of green software that Hussain had developed while at Microsoft. To provide an easy first step for organizations to take, the foundation borrowed from Hussain’s materials and created a new basic training course, “Principles of Green Software Engineering.” The training is only two or three hours long and level-sets students to the same playing field. ... When it comes to software development, computing inefficiencies (and carbon footprints) are more visible — bulky libraries for example — and engineers can improve it more easily. Everyday business operations, on the other hand, are a tad opaque but still contribute to the company’s overall sustainability score. Case in point: The carbon footprint of a Zoom call is harder to measure, Hussain points out. The foundation helped to define a Software Carbon Intensity (SCI) score, which applies to all business operations including software development and SaaS programs employees might use.
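
The SCI score referenced above follows the Green Software Foundation's published formula, SCI = ((E × I) + M) per R: energy consumed, times the grid's carbon intensity, plus embodied hardware emissions, all expressed per functional unit. A sketch of the arithmetic with illustrative numbers:

```python
# Sketch of the SCI formula: SCI = ((E * I) + M) per R, where E is energy (kWh),
# I is grid carbon intensity (gCO2eq/kWh), M is embodied emissions, and R is the
# functional unit (here, one API call). All numbers are illustrative.
energy_kwh_per_call = 0.0003   # E: energy used to serve one API call
carbon_intensity = 450         # I: gCO2eq per kWh for the hosting region
embodied_per_call = 0.02       # M: hardware's embodied carbon amortized per call

sci = energy_kwh_per_call * carbon_intensity + embodied_per_call
print(f"SCI = {sci:.3f} gCO2eq per API call")
```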



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell

Daily Tech Digest - July 11, 2024

Will AI Ever Pay Off? Those Footing the Bill Are Worrying Already

Though there is some nervousness around how long soaring demand can last, no one doubts the business models for those at the foundations of the AI stack. Companies need the chips and manufacturing they, and they alone, offer. Other winners are the cloud companies that provide data centers. But further up the ecosystem, the questions become more interesting. That’s where the likes of OpenAI, Anthropic and many other burgeoning AI startups are engaged in the much harder job of finding business or consumer uses for this new technology, which has gained a reputation for being unreliable and erratic. Even if these flaws can be ironed out (more on that in a moment), there is growing worry about a perennial mismatch between the cost of creating and running AI and what people are prepared to pay to use it. ... Another big red flag, economist Daron Acemoglu warns, lies in the shared thesis that by crunching more data and engaging more computing power, generative AI tools will become more intelligent and more accurate, fulfilling their potential as predicted. His comments were shared in a recent Goldman Sachs report titled “Gen AI: Too Much Spend, Too Little Benefit?”


How top IT leaders create first-mover advantage

“Some of the less talked about aspects of a high-performing team are the human traits: trust, respect, genuine enjoyment of each other,” Sample says. “I’m looking at experience and skills, but I’m also thinking about how the person will function collaboratively with the team. Do I believe they’ll have the best interest of the team at heart? Can the team trust their competency?” Sample also says he focuses on “will over skill.” “Qualities like curiosity and craftsmanship are sustainable, flexible skills that can evolve with whatever the new ‘toy’ in technology is,” he says. “If you’re approaching work with that bounty of curiosity and that willing mindset, the skills can adapt.” ... Steadiness and calm from the leader create the kind of culture where people are encouraged to take risks and work together to solve big problems and execute on bold agendas. That, ultimately, is what enables a technology organization to capitalize on innovative technologies. In fact, reflecting on his legacy as a CIO, Sample believes it’s not really about the technology; it’s about the people. His success, he says, has been in building the teams that operate the technology.


Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?

The patchwork approach is used by federal agencies in the US. Different agencies have responsibility for different verticals and can therefore introduce regulations more relevant to specific organizations. For example, the FCC regulates interstate and international communications, the SEC regulates capital markets and protects investors, and the FTC protects consumers and promotes competition. ... The danger is that the EU’s recent monolithic AI Act will go the same way as GDPR. Kolochenko prefers the US model. He believes the smaller, more agile method of targeted regulations used by US federal agencies can provide better outcomes than the unwieldy and largely static monolithic approach adopted by the EU. ... To regulate or not to regulate is a rhetorical question – of course AI must be regulated to minimize current and future harms. The real questions are whether it will be successful (no, it will not), partially successful (perhaps, but only so far as the curate’s egg is good), and will it introduce new problems for AI-using businesses (from empirical and historical evidence, yes).


The Team Sport of Cloud Security: Breaking Down the Rules of the Game

Cloud security today is too complicated to fall on the shoulders of one person or party. For this reason, most cloud services operate on a shared responsibility model that divvies security roles between the CSP and the customer. Large players in this space, such as AWS and Microsoft Azure, have even published frameworks that draw the lines of liability in the sand. While the exact delineations can change depending on the service model ... However, while the expectations laid out in shared responsibility models are designed to reduce confusion, customers often struggle to conceptualize what this framework looks like in practice. And unfortunately, when there’s a lack of clarity, there’s a window of opportunity for threat actors. ... The best-case scenario for mitigating cloud security risks is when CSPs and customers are transparent and aligned on their responsibilities right from the beginning. Even the most secure cloud services aren’t foolproof, so customers need to be aware of what security elements they’re “owning” versus what falls in the court of their CSP.


AI's new frontier: bringing intelligence to the data source

There has been a shift with organisations exploring how to bring AI to their data rather than uploading proprietary data to AI providers. This shift reflects a growing concern for data privacy and the desire to maintain control over proprietary information. Business leaders believe they can better manage security and privacy while still benefiting from AI advancements by keeping data in-house. Bringing AI solutions directly to an organisation’s data eliminates the need to move vast amounts of data, reducing security risks and maintaining data integrity. Crucially, organisations can maintain strict control over their data by implementing AI solutions within their own infrastructure to ensure that sensitive information remains protected and complies with privacy regulations. Additionally, keeping data in-house minimises the risks associated with data breaches and unauthorised access from third parties, providing peace of mind for both the organisation and its clients. Advanced AI-driven data management tools deliver this solution to businesses, automating data cleaning, validation, and transformation processes to ensure high-quality data for AI training.


How AI helps decode cybercriminal strategies

The biggest use case for AI is its ability to process, analyze, and interpret natural language communication efficiently. AI algorithms can quickly identify patterns, correlations, and anomalies within massive datasets, providing cybersecurity professionals with actionable insights. This capability not only enhances the speed and accuracy of threat detection but also enables a more proactive and comprehensive approach to securing organizations against dark web-originated threats. This is vital in an environment where the difference between detecting a threat early in the cyber kill chain vs once the attacker has achieved their objective can be hundreds of thousands of dollars. ... Another potential use case of AI is in quickly identifying and alerting specific threats relating to an organization, helping with the prioritization of intelligence. One thing an AI could look for in data is intention – to assess whether an actor is planning an attack, is asking for advice, is looking to buy or to sell access or tooling. Each of these indicates a different level of risk for the organization, which can inform security operations.


Widely Used RADIUS Authentication Flaw Enables MITM Attacks

The attack scenario - researchers say a "well-resourced attacker" could make it practical - fools the Remote Authentication Dial-In User Service into granting access to a malicious user without the attacker having to know or guess a login password. Despite its 1990s heritage and reliance on the MD5 hashing algorithm, many large enterprises still use the RADIUS protocol for authentication to the VPN or Wi-Fi network. It's also "universally supported as an access control method for routers, switches and other network infrastructure," researchers said in a paper published Tuesday. The protocol is used to safeguard industrial control systems and 5G cellular networks. ... For the attack to succeed, the hacker must calculate an MD5 collision within the client session timeout, where the common defaults are either 30 seconds or 60 seconds. The 60-second default is typically for users that have enabled multifactor authentication. That's too fast for the researchers, who were able to reduce the compute time down to minutes from hours, but not down to seconds. An attacker working with better hardware or cloud computing resources might do better, they said.
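
For context on the protocol's MD5 dependence, RFC 2865 defines the Response Authenticator as an MD5 digest over the packet header, the client's Request Authenticator, the response attributes, and the shared secret, which is the construction the collision research targets. The sketch below only illustrates that computation with dummy values; it is not attack code.

```python
# Sketch of the RADIUS Response Authenticator (per RFC 2865):
# MD5(Code + Identifier + Length + RequestAuth + Attributes + Shared Secret).
# All values below are dummy bytes for illustration only.
import hashlib
import struct

def response_authenticator(code: int, identifier: int, length: int,
                           request_auth: bytes, attributes: bytes,
                           shared_secret: bytes) -> bytes:
    header = struct.pack("!BBH", code, identifier, length)  # Code, ID, Length
    return hashlib.md5(header + request_auth + attributes + shared_secret).digest()

dummy = response_authenticator(
    code=2,                      # Access-Accept
    identifier=1,
    length=20,
    request_auth=b"\x00" * 16,   # would come from the client's request
    attributes=b"",
    shared_secret=b"example-secret",
)
print(dummy.hex())
```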


Can RAG solve generative AI’s problems?

Currently, RAG is probably the most effective way to enrich LLMs with novel and domain-specific data. This is particularly important for systems such as chatbots, since the information they generate must be up to date. However, RAG cannot reason iteratively, which means it remains dependent on the underlying dataset (the knowledge base, in RAG’s case). Even though this dataset is dynamically updated, if the information in it isn’t coherent or is poorly categorized and labeled, the RAG model won’t be able to recognize that the retrieved data is irrelevant, incomplete, or erroneous. It would also be naive to expect RAG to solve the AI hallucination problem. Generative AI algorithms are statistical black boxes, meaning that developers do not always know why a model hallucinates or whether the cause is insufficient or conflicting data. Moreover, dynamic retrieval from external sources does not guarantee that the data is free of inherent biases or disinformation. ... Therefore, RAG is in no way a definitive solution. In sensitive industries such as healthcare, law enforcement, or finance, fine-tuning LLMs with thoroughly cleaned, domain-specific datasets might be a more reliable option.
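
As a minimal sketch of the retrieve-then-generate loop under discussion, the toy "embedding" and stubbed "LLM" below stand in for real models; the point is simply that the answer can only be as good as whatever the retriever pulls from the knowledge base.

```python
# Minimal RAG sketch: a hashed bag-of-words "embedding" and a stubbed "LLM"
# stand in for real models; the knowledge base is a toy in-memory list.
import numpy as np

DIM = 64

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash each word into a fixed-size count vector."""
    v = np.zeros(DIM)
    for word in text.lower().split():
        v[hash(word) % DIM] += 1.0
    return v

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would send `prompt` to a model."""
    return "[LLM answer conditioned on]\n" + prompt

knowledge_base = [
    "RAG retrieves documents and passes them to the model as context.",
    "Fine-tuning bakes domain data into the model weights.",
    "Chatbots need up-to-date information to answer current questions.",
]
doc_vectors = np.stack([embed(d) for d in knowledge_base])

def rag_answer(question: str, k: int = 2) -> str:
    q = embed(question)
    # Cosine similarity against every document in the knowledge base.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    top_docs = [knowledge_base[i] for i in np.argsort(sims)[::-1][:k]]
    # The answer inherits whatever was retrieved: stale, mislabeled, or
    # irrelevant documents flow straight into the prompt.
    prompt = "Answer using only this context:\n" + "\n".join(top_docs) + f"\n\nQ: {question}"
    return generate(prompt)

print(rag_answer("How does RAG keep a chatbot up to date?"))
```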


Navigating the New Data Norms with Ethical Guardrails for Ethical AI

To convert ethical principles into a practical roadmap, businesses need a clear framework aligned with industry standards and company values. Also, beyond integrity and fairness, businesses must demonstrate tangible ROI by focusing on metrics like customer acquisition cost, lifetime value, and employee engagement. Operationalizing ethical guardrails involves creating a structured approach to ensure AI deployment aligns with ethical standards. Companies can start by fostering a culture of ethics through comprehensive employee education programs that emphasize the importance of fairness, transparency, and accountability. Establishing clear policies and guidelines is crucial, alongside implementing robust risk assessment frameworks to identify and mitigate potential ethical issues. Regular audits and continuous monitoring should be part of the process to ensure adherence to these standards. Additionally, maintaining transparency for end-users by openly sharing how AI systems make decisions, and providing mechanisms for feedback, further strengthens trust and accountability.
 

How CIOs Should Approach DevOps

CIOs should have a vision for scaling DevOps across the enterprise to unlock its full range of benefits. A collaborative culture, automation, and technical skills are all necessary for achieving scale. Beyond these, the CIO needs to think about the right team structure, security landscape, and technical tools that will take DevOps safely from pilot to production to enterprise scale. It is best to start small: dedicate a small platform team focused solely on building a platform that automates common development tasks, and build that platform incrementally and iteratively. Put together another small team with all the skills required to deliver value to customers, and constantly gather and incorporate customer feedback to improve development at every stage. Ultimately, customer satisfaction is what matters most in any DevOps program. Security needs to be part of every DevOps process right from the start: when a process is automated, its security and compliance checks should be automated with it. Frequent code reviews and building awareness among all the teams involved will help create secure, resilient applications that can be scaled with confidence.



Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford

Daily Tech Digest - July 10, 2024

How platform teams lead to better, faster, stronger enterprises

Platform teams are uniquely equipped to optimize resource allocation because they sit between developers and the cloud infrastructure and compute those developers need, and they are able to maximize the efficiency and effectiveness of software development processes. With their unique set of skills and expertise, they collaborate effectively with other teams, including developers, data scientists, and operations teams, to accurately understand their needs and pain points. Using a product approach, platform teams remove barriers for developers and operations teams by offering shared services for developer self-service, enabling faster modernization within organizational boundaries, and providing automation that simplifies the management of applications and Kubernetes clusters in the cloud. By fostering a culture of innovation, platform teams play a crucial role in keeping the organization at the forefront of emerging trends and technologies. This enables enterprises to provide innovative solutions that set them apart in the market.


Developing An AI Use Policy

An AI Use Policy is designed to ensure that any AI technology used by your business is applied in a safe, reliable and appropriate manner that minimises risks. It should be developed to inform and guide your employees on how AI can be used within your business. ... Perhaps the most important part for the majority of your employees is a set of specific do’s and don’ts for inputs and outputs, to ensure compliance with data security, privacy and ethical standards. For example, “Don’t input any company confidential, commercially sensitive or proprietary information”, “Don’t use AI tools in a way that could inadvertently perpetuate or reinforce bias” and “Don’t input any customer or co-worker’s personal data”. For outputs, guidance can reiterate to staff the potential for misinformation or ‘hallucinations’ generated by AI. Consider rules such as “Clearly label any AI-generated content”, “Don’t share any output without careful fact-checking” or “Make sure that a human has the final decision when using AI to help make a decision which could impact any living person”.


Synergy between IoT and blockchain transforming operational efficiency

The synergy between the two technologies is integral to achieving Industry 4.0 goals, including digital transformation, decentralised connectivity, and smart industry advancements. Through this integration, organisations can achieve real-time visibility into production operations, optimise supply chain processes, and enhance overall efficiency. ... In regulated industries like pharmaceutical manufacturing, where compliance is crucial, integrating IoT and blockchain lets companies onboard suppliers to upload raw material information, batch numbers, and quality checks to a blockchain ledger. IoT devices automate data acquisition during manufacturing and storage, ensuring data integrity and transparency. In smart city ecosystems, local authorities share data with service providers for waste management, traffic updates, and more. Traffic data from sensors can be securely uploaded to a blockchain, where third-party services like food delivery and ridesharing can access it to optimise operations. Logistics companies use IoT systems to gather data on location and handling, which is uploaded to a blockchain ledger to track goods, estimate delivery time, and provide real-time updates.
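
As an illustrative sketch of the underlying idea (not any specific blockchain platform), making IoT readings tamper-evident comes down to chaining each ledger entry to the hash of the previous one; the batch numbers and readings below are invented.

```python
# Toy, tamper-evident ledger for IoT readings. Real deployments would use an
# actual blockchain platform; this only illustrates the hash-chaining idea.
import hashlib
import json
import time

class SensorLedger:
    def __init__(self):
        self.chain = []  # each entry: {"timestamp", "data", "prev_hash", "hash"}

    def append(self, reading: dict) -> dict:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {"timestamp": time.time(), "data": reading, "prev_hash": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.chain.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any altered reading breaks the chain."""
        for i, entry in enumerate(self.chain):
            body = {k: v for k, v in entry.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            if i > 0 and entry["prev_hash"] != self.chain[i - 1]["hash"]:
                return False
        return True

ledger = SensorLedger()
ledger.append({"batch": "B-1042", "temperature_c": 4.7, "location": "cold store 3"})
ledger.append({"batch": "B-1042", "temperature_c": 5.1, "location": "truck 17"})
print(ledger.verify())  # True until any stored reading is modified
```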


Ignore Li-ion fire risks at your peril

Li-ion batteries are prone to destructive and hard-to-control fires. There have been several reported incidents in data centers, some of which have led to serious outages, but they are not well-documented or systematically studied. ... A commonly held view is that Li-ion’s fire risk in the data center is overstated, partly as a result of marketing by vendors of alternative chemistries such as salt and nickel-zinc. If these products are promoted as a “safe” alternative, then it will (it is speculated) create a perception that Li-ion is “unsafe.” After assessing the evidence, examining the science, and hearing from data center operators at recent member meetings, Uptime Institute is taking a cautious and practicable stance at this point. While it is true that Li-ion batteries have a higher risk of fire compared with other chemistries, and these fires are particularly problematic, Uptime Institute engineers do not think Li-ion batteries should be rejected out of hand. ... Data center builders and operators should carefully consider the benefits of Li-ion batteries alongside the risks. As well as the obvious risk of serious fires, there are financial and reputational risks in preparing for, avoiding, and responding to such incidents.


More than a CISO: the rise of the dual-titled IT leader

Dual-title roles give CISOs new levers to work with and more scope to drive strategic integration and alignment of cybersecurity within the organization. ... Belknap finds having his own team of engineers puts him in a stronger position when working with partners. When looking for support or assistance with a project, his team will have already built something, reducing the amount of work needed from the partner team. “This means we can lean on them to be responsible for the things that only they can do. I don’t have to pull them into the work that only I can do or the work that’s not aligned to their expertise,” he says. These dual-title roles also recognize how CISOs are increasingly operating as technology leaders and operators of the organization, according to Adam Ely, head of digital products at Fidelity Investments, who was formerly the firm’s CISO and has a long history in security. Ely says that because CISOs typically work across an organization, know how the business lines work, and are day-to-day leaders of people and technology as well as crisis managers, they are well positioned for dual-title or more senior roles.


You Can’t Wish Away Technology Complexity

Every business succeeds because of technology. Every person gets paid by technology. The value of our currency itself is about technology. Of course, it is not only about technology. But tell that to the CFO or CLO. When it is about finance, there is very little pushback in saying it is all about money. When it is about legal, there is no pushback about it being about law. I’ve noticed only technologists pull back and say, “You’re right, it’s not about technology.” ... What people often forget is that technology complexity is cool on multiple levels. It gives us the ability to make different choices for stakeholders and customers (I mean real customers, not stakeholders who think they are customers; note to business stakeholders: you and I get our paychecks from the same place, you are not my customer. Our customer is my customer). But while this complexity allows for choice, it also creates a dependency on understanding those choices, or on a professional who does. I don’t pretend to understand medicine. That is why I ask doctors what to do.


Electronic Health Record Errors Are a Serious Problem

The exposure of healthcare records, in even minor ways, leaves patients highly vulnerable. “I never reached out to this woman [whose records were entered into my father’s], but I had all her contact information. I could have gone to her house and handed her the copy of the results I had found in my dad’s records,” Hollingsworth says. ... Data aggregators pose a further risk. These organizations may collect deidentified data to perform analyses on population-level health issues for both healthcare organizations and insurance companies. “Are they following the same security standards that we follow in the health care transaction world?” Ghanayem asks. “I don’t know.” ... Clear distinctions between important information fields must be made to cut down on adjacency errors. Concise patient summaries at the beginning of each record and usable search features may increase usability and decrease frustration that leads to the introduction of errors. And refining when alerts are issued can decrease alert fatigue, which may lead providers to simply ignore alerts even when they are valid.


Diversifying cyber teams to tackle complex threats

To make a significant change and deliver a more diverse cyber workforce, we need to focus on leadership and change our language and processes for recruitment. This takes courage and is the biggest challenge organizations face. Having a diverse team helps others see it is a place for them. It isn’t just about attracting talent; it’s also about openness and retaining talent. Organizations need to help individuals from diverse backgrounds see themselves as role models who can speak up about the opportunities within the sector. Diversity fosters a sense of belonging and inclusivity, making the cybersecurity field more attractive to a wider range of individuals. When potential recruits see relatable role models within a team, it breaks down the traditional and somewhat homogenous perception of cybersecurity. This inclusivity is crucial for attracting talent from underrepresented groups, particularly women and minority groups, who may not have traditionally seen themselves in cybersecurity roles. A diverse team with strong role models creates a positive feedback loop.


Nanotechnology and SRE: Pioneering Precision in Performance

Nanotechnology offers the opportunity to transform SRE at the atomic level — addressing individual tasks, subtasks, and tickets. For example, extra-sensitive nanosensors can continuously monitor system performance metrics, including temperature, voltage, and processing load. When placed in data centers, these sensors enable real-time data collection and analysis, detecting electrical and mechanical issues before they escalate and extending the lifespan of technological components. Nanobots can be programmed to address hardware issues and routine maintenance tasks. Together, these technologies can integrate into a self-healing and continuously improving system in line with SRE principles. ... Nanotechnology can potentially transform SRE, leading to enhanced system reliability and performance. Nanotechnology-enabled solutions can allow more precise monitoring, optimization, and real-time improvements, supporting the key pillars of SRE. At the same time, the foundational principles of SRE can be applied to ensure the reliability of advanced nanotechnology systems. 
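
The monitoring-and-remediation loop described here does not depend on the sensors being nano-scale; a conventional, simplified SRE-style sketch of it, with invented metric names, operating limits, and a stubbed remediation hook, might look like this:

```python
# Illustrative monitoring loop: each sensor reading is checked against a
# per-metric operating range, and out-of-range values trigger a remediation
# hook before the issue escalates. Metric names, limits, and the remediation
# action are all made up.

LIMITS = {                      # (low, high) acceptable operating range
    "temperature_c": (10.0, 30.0),
    "voltage_v": (11.4, 12.6),
    "cpu_load": (0.0, 0.85),
}

def remediate(metric: str, value: float) -> None:
    """Placeholder for a self-healing action (e.g. shift load, open a ticket)."""
    print(f"remediation triggered: {metric}={value}")

def ingest(metric: str, value: float) -> None:
    low, high = LIMITS[metric]
    if not (low <= value <= high):
        remediate(metric, value)

# Example: a temperature spike and an overloaded host are both flagged.
for metric, value in [("temperature_c", 21.3), ("voltage_v", 12.1),
                      ("cpu_load", 0.93), ("temperature_c", 41.0)]:
    ingest(metric, value)
```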


Three Areas Where AI Can Make a Huge Difference Without Significant Job Risk

Doing a QC job can be frustrating: even though the work is critical to the outcome, non-QC peers and management often treat you like a potentially avoidable annoyance. You stand in the way of shipping on time and at volume, potentially delaying or even eliminating performance-based bonuses. We are already discovering that to assure the quality of an AI-driven coding effort, a second AI is needed to check the result, because people just don’t like doing QC on code, particularly those who wrote it. ... In short, properly applied AI could highlight and help address problems that critically reduce a company’s ability to perform to its full potential and prevent it from becoming a great place to work. ... Calculating an employee’s contribution and then using it to set compensation transparently should significantly reduce the number of employees who feel they are being treated unfairly, either by eliminating that unfairness or by showing them a path to improve their value and thus positively impact their pay.



Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal