Daily Tech Digest - September 18, 2024

Putting Threat Modeling Into Practice: A Guide for Business Leaders

One of the primary benefits of threat modeling is its ability to reduce the number of defects that make it to production. By identifying potential threats and vulnerabilities during the design phase, companies can implement security measures that prevent these issues from ever reaching the production environment. This proactive approach not only improves the quality of products but also reduces the costs associated with post-production fixes and patches. ... Along similar lines, threat modeling can help meet obligations defined in contracts if those contracts include terms related to risk identification and management. ... Beyond obligations linked to compliance and contracts, many businesses also establish internal IT security goals. For example, they might seek to configure access controls based on the principle of least privilege or enforce zero-trust policies on their networks. Threat modeling can help to put these policies into practice by allowing organizations to identify where their risks actually lie. From this perspective, threat modeling is a practice that the IT organization can embrace because it helps achieve larger goals – namely, those related to internal governance and security strategy.


How Cloud Custodian conquered cloud resource management

Everybody knows the cloud bill is basically rate multiplied by usage. But while most enterprises have a handle on rate, usage is the hard part. You have different application teams provisioning infrastructure. You go through code reviews. Then when you get to five to 10 applications, you get past the point where anyone can possibly know all the components. Now you have containerized workloads on top of more complex microservices architectures. And you want to be able to allow a combination of cathedral (control) and bazaar (freedom of technology choice) governance, especially today with AI and all of the new frameworks and LLMs [large language models]. At a certain point you lose the script to be able to follow all of this in your head. There are a lot of tools to enable that understanding — architectural views, network service maps, monitoring tools — all feeling out different parts of the elephant versus giving an organization a holistic view. They need to know not only what’s in their cloud environment, but what’s being used, what’s conforming to policy, and what needs to be fixed, and how. That’s what Cloud Custodian is for — so you can define the organizational requirements of your applications and map those up against cloud resources as policy.
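Cloud Custodian itself expresses policies in YAML against real cloud APIs; as a rough, stdlib-only sketch of the idea the quote describes — declaring organizational requirements and mapping them against resources as policy — consider (resource fields and rules here are invented for illustration):

```python
# Hypothetical sketch of policy-style resource filtering in the spirit of
# Cloud Custodian: declare organizational requirements, then flag the cloud
# resources that do not conform to them.

def non_conforming(resources, rules):
    """Return (resource id, failed fields) for resources violating any rule."""
    violations = []
    for res in resources:
        failed = [f for f, want in rules.items() if res.get(f) != want]
        if failed:
            violations.append((res["id"], failed))
    return violations

resources = [
    {"id": "bucket-1", "encrypted": True,  "tagged": True},
    {"id": "bucket-2", "encrypted": False, "tagged": True},
]
rules = {"encrypted": True, "tagged": True}
print(non_conforming(resources, rules))  # [('bucket-2', ['encrypted'])]
```

The real tool evaluates such requirements continuously against live cloud inventory, which is what turns a policy document into the "holistic view" described above.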


5 Steps to Identify and Address Incident Response Gaps

To compress the time it takes to address an incident, it’s not enough to stick to the eyes-on-glass model that network operations centers (NOCs) have traditionally privileged. It’s too human-intensive and error-prone to effectively triage an increasingly overwhelming volume of data. To go from event to resolution with minimal toil and increased speed, teams can leverage AI and automation to deflect noise, surface only the most critical alerts and automate diagnostics and remediations. Generative AI can amplify that effect: For teams collaborating in ChatOps tools, common diagnostic questions can be used as prompts to get context and accelerate action. ... When an incident hits, teams spend too much time gathering information and looping in numerous people to tackle it. Generative AI can be used to quickly summarize key data about the incident and provide actionable insights at every step of the incident life cycle. It can also supercharge the ability to develop and deploy automation jobs faster, even by non-technical teams: Operators can translate conversational prompts into proposed runbook automation or leverage pre-engineered prompts based on common categories.
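The noise-deflection step described above can be sketched very simply: collapse duplicate alerts by fingerprint and surface only the most severe before a human ever gets paged. This is an illustrative sketch, not any product's API; the alert fields and severity threshold are invented:

```python
# Deduplicate a stream of alerts by fingerprint and keep only those at or
# above a severity threshold, sorted so the most critical surface first.
from collections import Counter

def triage(alerts, min_severity=3):
    """Collapse duplicate alerts and keep those at or above min_severity."""
    counts = Counter((a["fingerprint"], a["severity"]) for a in alerts)
    kept = [
        {"fingerprint": fp, "severity": sev, "count": n}
        for (fp, sev), n in counts.items()
        if sev >= min_severity
    ]
    return sorted(kept, key=lambda a: -a["severity"])

alerts = [
    {"fingerprint": "disk-full", "severity": 4},
    {"fingerprint": "disk-full", "severity": 4},
    {"fingerprint": "cpu-spike", "severity": 2},
    {"fingerprint": "db-down",   "severity": 5},
]
print([a["fingerprint"] for a in triage(alerts)])  # ['db-down', 'disk-full']
```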


DevOps with OpenShift Pipelines and OpenShift GitOps

Unlike some other CI solutions, such as legacy tool Jenkins, Pipelines is built on native Kubernetes technologies and thus is resource-efficient, since pipelines and tasks are only actively running when needed. Once the pipeline has completed, no resources are consumed by the pipeline itself. Pipelines and tasks are constructed using a declarative approach following standard Kubernetes practices. However, OpenShift Pipelines includes a user-friendly interface built into the OpenShift console that enables users to easily monitor the execution of the pipelines and view task logs as needed. The user interface also shows metrics for individual task execution, enabling users to better optimize pipeline performance. In addition, the user interface enables users to quickly create and modify pipelines visually. While users are encouraged to store tasks and Pipeline resources in Git, the ability to visually create and modify pipelines greatly reduces the learning curve and makes the technology approachable for new users. You can leverage pipelines-as-code to provide an experience that is tightly integrated with your backend Git provider, such as GitHub or GitLab.


Rethinking enterprise architects’ roles for agile transformation

Mounting technical debt and extending the life of legacy systems are key risks CIOs should be paranoid about. The question is, how should CIOs assign ownership to this problem, require that technical debt’s risks are categorized, and ensure there’s a roadmap for implementing remediations? One solution is to assign the responsibility to enterprise architects in a product management capacity. Product managers must define a vision statement that aligns with strategic and end-user needs, propose prioritized roadmaps, and oversee an agile backlog for agile delivery teams. ... Enterprise architects who have a software development background are ideal candidates to assume the delivery leader role and can steer teams toward developing platforms with baked-in security, performance, usability, and other best practices. ... Enterprise architects assuming a sponsorship role in these initiatives can help steer them toward force-multiplying transformations that reduce risks and provide additional benefits in improved experiences and better decision-making. CIOs who want enterprise architects to act as sponsors should provide them with a budget and oversee the development of a charter for managing investment priorities.


The best way to regulate AI might be not to specifically regulate AI. This is why

Most of the potential uses of AI are already covered by existing rules and regulations designed to do things such as protect consumers, protect privacy and outlaw discrimination. These laws are far from perfect, but where they are not perfect the best approach is to fix or extend them rather than introduce special extra rules for AI. AI can certainly raise challenges for the laws we have – for example, by making it easier to mislead consumers or to apply algorithms that help businesses to collude on prices. ... Finally, there’s a lot to be said for becoming an international “regulation taker”. Other jurisdictions such as the European Union are leading the way in designing AI-specific regulations. Product developers worldwide, including those in Australia, will need to meet those new rules if they want to access the EU and those other big markets. If Australia developed its own idiosyncratic AI-specific rules, developers might ignore our relatively small market and go elsewhere. This means that, in those limited situations where AI-specific regulation is needed, the starting point ought to be the overseas rules that already exist. There’s an advantage in being a late or last mover. 


How LLMs on the Edge Could Help Solve the AI Data Center Problem

Anyone interacting with an LLM in the cloud is potentially exposing the organization to privacy questions and the potential for a cybersecurity breach. As more queries and prompts are being done outside the enterprise, there are going to be questions about who has access to that data. After all, users are asking AI systems all sorts of questions about their health, finances, and businesses. To do so, these users often enter personally identifiable information (PII), sensitive healthcare data, customer information, or even corporate secrets. The move toward smaller LLMs that can either be contained within the enterprise data center – and thus not running in the cloud – or that can run on local devices is a way to bypass many of the ongoing security and privacy concerns posed by broad usage of LLMs such as ChatGPT. ... Pruning the models to reach a more manageable number of parameters is one obvious way to make them more feasible on the edge. Further, developers are shifting the GenAI model from the GPU to the CPU, reducing the processing footprint, and building standards for compiling. As well as the smartphone applications noted above, the use cases that lead the way will be those that are achievable despite limited connectivity and bandwidth, according to Goetz.
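A toy illustration of the magnitude pruning mentioned above: weights below a cutoff are zeroed, shrinking the effective parameter count. Real pruning operates on tensors inside frameworks such as PyTorch; this pure-Python sketch, with invented weights and ratio, only shows the idea:

```python
# Magnitude pruning sketch: keep only the largest-magnitude weights,
# zeroing the rest to reduce the model's effective size for edge deployment.

def prune(weights, keep_ratio=0.5):
    """Zero out the smallest-magnitude weights, keeping keep_ratio of them."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(abs(w) for w in weights)[-k]  # k-th largest magnitude
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
print(prune(w, keep_ratio=0.5))  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```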


'Good complexity' can make hospital networks more cybersecure

Because complicated systems have structures, Tanriverdi says, it's difficult but feasible to predict and control what they'll do. That's not feasible for complex systems, with their unstructured connections. Tanriverdi found that as health care systems got more complex, they became more vulnerable. ... The problem, he says, is that such systems offer more data transfer points for hackers to attack, and more opportunities for human users to make security errors. He found similar vulnerabilities with other forms of complexity, including many different types of medical services handling health data, and decentralizing strategic decisions to member hospitals instead of making them at the corporate center. The researchers also proposed a solution: building enterprise-wide data governance platforms, such as centralized data warehouses, to manage data sharing among diverse systems. Such platforms would convert dissimilar data types into common ones, structure data flows, and standardize security configurations. "They would transform a complex system into a complicated system," he says. By simplifying the system, they would further lower its level of complication.


Threats by Remote Execution and Activating Sleeper Devices in the Context of IoT and Connected Devices

As the Internet of Things proliferates, the number of connected devices in both civilian and military contexts is increasing exponentially. From smart homes to military-grade equipment, the IoT ecosystem connects billions of devices, all of which can potentially be exploited by adversaries. The pagers in the Hezbollah case, though low-tech compared to modern IoT devices, represent the vulnerability of a system where devices are remotely controllable. In the IoT realm, the stakes are even higher, as everyday devices like smart thermostats, security cameras, and industrial equipment are interconnected and potentially exploitable. In a modern context, this vulnerability could be magnified when applied to smart cities, critical infrastructure, and defense systems. If devices such as power grids, water systems, or transportation networks are connected to the internet, they could be subjected to remote control by malicious actors. ... One of the most alarming aspects of this situation is the suspected infiltration of the supply chain. The pagers used by Hezbollah were reportedly tampered with before being delivered to the group, likely with explosives embedded within the devices.


Detecting vulnerable code in software dependencies is more complex than it seems

A “phantom dependency” refers to a package used in your code that isn’t declared in the manifest. This concept is not unique to any one language (it’s common in JavaScript, Node.js, and Python). This is problematic because you can’t secure what you can’t see. Traditional SCA solutions focus on manifest files to identify all application dependencies, but those can both be under- or over-representative of the dependencies actually used by the application. They can be under-representative if the analysis starts from a manifest file that only contains a subset of dependencies, e.g., when additional dependencies are installed in a manual, scripted or dynamic fashion. This can happen in Python ML/AI applications, for example, where the choice of packages and versions often depends on operating systems or hardware architectures, which cannot be fully expressed by dependency constraints in manifest files. And they are over-representative if they contain dependencies not actually used. This happens, for example, if you dump the names of all the components contained in a bloated runtime environment into a manifest file.



Quote for the day:

"An accountant makes you aware but a leader makes you accountable." -- Henry Cloud

Daily Tech Digest - September 17, 2024

Dedicated Cloud: What It’s For and How It’s Different From Public Cloud

While dedicated cloud services give you a level of architectural control you will not get from public clouds, using them comes with trade-offs, the biggest one being the amount of infrastructure engineering ability needed. But if your team has concluded that a public cloud isn’t a good fit, you probably know that already and have at least some of that ability on hand. ... Ultimately, dedicated cloud is about keeping control and giving yourself options. You can quickly deploy different combinations of resources, interconnecting dedicated infrastructure with public cloud services, and keep fine-tuning and refining as you go. You get full control of your data and your architecture with the freedom to change your mind. The trade-off is that you must be ready to roll up your sleeves and manage operating systems, deploy storage servers, tinker with traffic routing and do whatever else you need to do to get your architecture just right. But again, if you already know that you need more knobs than you can turn using a typical public cloud provider, you are probably ready anyway.


Building a More Sustainable Data Center: Challenges and Opportunities in the AI Era

Sustainability is not just a compliance exercise on reducing the negative impact on the environment, it also can bring financial benefits to an organization. According to Gartner’s Unlock the Business Benefits of Sustainable IT Infrastructure report, “[Infrastructure and operations’] contribution to sustainability strategies tends to focus on environmental impact, but sustainability also can have a significant positive impact on non-environmental factors, such as brand, innovation, resilience and attracting talent.” As a result, boards should embrace the financial opportunities of companies’ Environmental, Social, and Governance (ESG) compliance rather than consider it just another unavoidable compliance expense without a discernable return on investment (ROI). ... To improve data center resilience, Gartner recommends that organizations expand use of renewable energy using a long-term power purchase agreement to contain costs, generate their own power where feasible, and reuse and redeploy equipment as much as possible to maximize the value of the resource.


Data Business Evaluation

Why data businesses? Because they can be phenomenal businesses with extremely high gross margins — as good as or better than software-as-a-service (SaaS). Often data businesses can be the best businesses within the industries that they serve. ... Data aggregation can be a valuable way to assemble a data asset as well, but the value typically hinges on the difficulty of assembling the data…if it is too easy to do, others will do it as well and create price competition. Often the value comes in aggregating a long tail of data that is costly to do more than once either for the suppliers or a competitive aggregator. ... The most stable data businesses tend to employ a subscription business model in which customers subscribe to a data set for an extended period of time. Subscription models are clearly better when the subscriptions are long term or, at least, auto-renewing. Not surprisingly, the best data businesses are generally syndicated subscription models. On the other end, custom data businesses that produce data for clients in a one-off or project-based manner generally struggle to attain high margins and predictability, but can be solid businesses if the data manufacturing processes are optimized.


Leveraging AI for water management

AI is reshaping the landscape of water management by providing predictive insights, optimising operations, and enabling real-time decision-making. One of AI’s key contributions is its ability to forecast water usage patterns. AI models can accurately predict water demand by analysing historical data and considering variables like weather conditions, population trends, and industrial activities. This helps water utilities allocate resources more effectively, minimising waste while ensuring consistent supply to communities. Water utilities can also integrate AI systems to monitor and optimise their supply networks. ... One of the most critical applications of AI is in water quality monitoring. Traditional methods of detecting water contaminants are labour-intensive and involve periodic testing, which can result in delayed responses to contamination events. AI, on the other hand, can process continuous data streams from IoT-enabled sensors installed in water distribution systems. These sensors monitor variables like pH levels, temperature, and turbidity, detecting changes in water quality in real time. AI algorithms analyse the data, triggering immediate alerts when contaminants or irregularities are detected.
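The real-time quality check described above reduces, at its simplest, to flagging a sensor reading that deviates sharply from the recent window. Production systems use far richer models; this stdlib sketch, with invented pH values and a conventional three-sigma threshold, shows the core idea:

```python
# Flag a sensor reading as anomalous when it lies more than k standard
# deviations from the mean of a recent window of readings.
from statistics import mean, stdev

def is_anomalous(window, reading, k=3.0):
    """True if the reading lies more than k sigma from the recent window."""
    mu, sigma = mean(window), stdev(window)
    return sigma > 0 and abs(reading - mu) > k * sigma

recent_ph = [7.1, 7.0, 7.2, 7.1, 7.0, 7.1]
print(is_anomalous(recent_ph, 7.1))  # False
print(is_anomalous(recent_ph, 5.4))  # True  -> trigger an alert
```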


History of Cybersecurity: Key Changes Since the 1990s and Lessons for Today

Most cyber attackers hadn’t considered using the internet to pursue financial gain or cause serious harm to organizations. To be sure, financial crimes based on computer hacking took place in the '90s and early 2000s. But they didn't dominate the news in an endless stream of cautionary tales, and most people thought the 1995 movie Hackers was a realistic depiction of how hacking worked. ... By the mid-2000s, however, internet-based attacks became more harmful and frequent. This was the era when threat actors realized they could build massive botnets and then use them to distribute spam or send scam emails. These attacks could have caused real financial harm, but they weren't exactly original types of criminal activity. They merely conducted traditional criminal activity, like scams, using a new medium: the internet. ... The 2010s were also a time of massive technological change. The advent of cloud computing, widespread adoption of mobile devices, and rollout of Internet of Things (IoT) hardware meant businesses could no longer define clear network perimeters or ensure that sensitive data always remained in their data centers. 


Gateways to havoc: Overprivileged dormant service accounts

Dormant accounts go unnoticed, leaving organizations unaware of their access privileges, the systems they connect to, how to access them, and even their reason for existing. Their elevated privileges, lax security measures, and invisibility make dormant service accounts prime targets for infiltration. By compromising such an account, attackers can gain significant access to systems and sensitive data, often without raising immediate suspicion for extended periods of time. During that time, cyber criminals can elevate privileges, exfiltrate data, disrupt operations, and install malware and backdoors, causing total mayhem completely undetected until it’s too late. The weaknesses that plague dormant accounts make them open doors into an organization’s system. If compromised, an overprivileged dormant account can expose sensitive data such as customer PII, PHI, intellectual property, and financial records, leading to costly and damaging data breaches. Even without being breached, dormant accounts are significant liabilities, potentially causing operational disruptions and regulatory compliance violations.
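One way such accounts might be surfaced is to flag service accounts whose last authentication is older than a cutoff and which still hold elevated privileges. This is a hedged sketch — the account fields, cutoff, and inventory format are all invented for illustration:

```python
# Surface overprivileged dormant service accounts: no recent authentication
# combined with elevated privileges.
from datetime import datetime, timedelta

def dormant_privileged(accounts, now, max_idle_days=90):
    """Names of privileged accounts idle for longer than max_idle_days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [
        a["name"] for a in accounts
        if a["last_auth"] < cutoff and a["privileged"]
    ]

now = datetime(2024, 9, 17)
accounts = [
    {"name": "svc-backup", "last_auth": datetime(2023, 1, 5),  "privileged": True},
    {"name": "svc-ci",     "last_auth": datetime(2024, 9, 10), "privileged": True},
]
print(dormant_privileged(accounts, now))  # ['svc-backup']
```

Accounts flagged this way would then be reviewed for deactivation or for scoping their privileges down.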


Overcoming AI hallucinations with RAG and knowledge graphs

One challenge that has come up in deploying RAG into production environments is that it does not handle searches across lots of documents that contain similar or identical information. When these files are chunked and turned into vector embeddings, each one will have its data available for searching. When each of those files has very similar chunks, finding the right data to match that request is harder. RAG can also struggle when the answer to a query exists across a number of documents that cross reference each other. RAG is not aware of the relationships between these documents. ... Rather than storing data in rows and columns for traditional searches, or as embeddings for vector search, a knowledge graph represents data points as nodes and edges. A node will be a distinct fact or characteristic, and edges will connect all the nodes that have relevant relationships to that fact. In the example of a product catalog, the nodes may be the individual products while the edges will be similar characteristics that each of those products possess, like size or color.
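The product-catalog example can be made concrete with a minimal in-memory graph: products are nodes, shared characteristics form the edges, and related products are found by walking edges rather than by vector similarity. Real deployments use a graph database; the products and attributes here are invented:

```python
# Minimal knowledge-graph sketch: nodes are products, edges connect products
# that share an attribute (size, color, ...). Relationships are explicit,
# unlike in vector search.
from collections import defaultdict

def build_graph(products):
    """Map each attribute value to the set of products (nodes) sharing it."""
    edges = defaultdict(set)
    for name, attrs in products.items():
        for attr in attrs:
            edges[attr].add(name)
    return edges

def related(graph, product, products):
    """Products connected to `product` through at least one shared attribute."""
    out = set()
    for attr in products[product]:
        out |= graph[attr]
    return out - {product}

products = {
    "shirt-a": {"color:red", "size:M"},
    "shirt-b": {"color:red", "size:L"},
    "hat-c":   {"size:M"},
}
g = build_graph(products)
print(sorted(related(g, "shirt-a", products)))  # ['hat-c', 'shirt-b']
```

Because the relationships are stored explicitly, a query that spans cross-referencing documents can follow edges instead of hoping that similar chunks land near each other in embedding space.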


Preparing for the next big cyber threat

In addressing emerging threats, CISOs will have to incorporate controls to counter adversarial AI tactics and foster synergies with data and AI governance teams. Controls to ensure quantum-resistant cryptography in the symmetric space to future-proof encrypted data and transmissions will also be put in place if they are not already. Many organizations — including banks — are already enforcing the use of quantum-resistant cryptography, for instance, with the use of the Advanced Encryption Standard (AES)-256 algorithm because data encrypted by it is not vulnerable to cracking by quantum computers. Zero trust as a mindset and approach will be very important, especially in addressing insecure design components of OT environments used in Industry 4.0. Therefore, one of the key areas of strengthening protection would also be identity and access management (IAM). ... As part of strong cyber resilience, we need sound IR playbooks to effectively draw bridges, we need plan Bs and plan Cs, business continuities as well as table-tops and red teams that involve our supply chain vendors. And finally, response to the ever-evolving threat landscape will entail greater adaptability and agility.


The Impact of AI on The Ethernet Switch Market

Enterprises investing in new infrastructure to support AI will have to choose which technology is best for their particular needs. InfiniBand and Ethernet will likely continue to coexist for the foreseeable future. It’s highly likely that Ethernet will remain dominant in most network environments while InfiniBand will retain its foothold in high-performance computing and specialized AI workloads. ... While InfiniBand has several very strong advantages, advances in Ethernet are quickly closing the gap, making its ubiquity likely to continue. There are multiple other reasons that enterprises are likely to stick with Ethernet, too, such as lower cost, existing in-house talent, prolific integrations with existing infrastructures, and compatibility with legacy applications, among others. ... The Ultra Ethernet Consortium is proactively working to extend Ethernet's life to ensure it remains useful and cost-effective for both current and future technologies. The aim is primarily to reduce the need for drastic shifts to alternative solutions that may constitute heavy lifts and costs in adapting existing networks. 


Making the Complex Simple: Authorization for the Modern Enterprise

Modernizing legacy authorization systems is essential for organizations to enhance security and support their growth and innovation. Modernizing and automating operations allows organizations to overcome the limitations of legacy systems, enhance the protection of sensitive information and stay competitive in today’s digital landscape. Simplifying access control and automating workflows to modernize and optimize operations greatly increases productivity and lowers administrative burdens. Organizations can direct important resources toward more strategic endeavors by automating repetitive operations, which increases output and promotes an agile corporate environment. This change improves operational efficiency and puts businesses in a better position to adapt to changing market demands. Enhancing security is another critical benefit of modernizing authorization systems. Centralized management coupled with advanced role-based access control (RBAC) strengthens an organization’s security posture by preventing unauthorized access. Centralized systems allow for efficient user permissions management, ensuring that only authorized individuals can access sensitive information. 
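Centralized RBAC as described above reduces to a single mapping from roles to permissions, consulted by one gate. The role and permission names in this sketch are invented; real systems add hierarchies, attributes, and audit logging on top of the same core check:

```python
# Minimal role-based access control: roles map to permissions in one
# central place, and a single function gates every access decision.
ROLES = {
    "analyst": {"report:read"},
    "admin":   {"report:read", "report:write", "user:manage"},
}

def authorize(user_roles, permission):
    """True if any of the user's roles grants the requested permission."""
    return any(permission in ROLES.get(role, set()) for role in user_roles)

print(authorize(["analyst"], "report:read"))  # True
print(authorize(["analyst"], "user:manage"))  # False
```

Keeping the mapping in one place is what makes permissions auditable: answering "who can manage users?" is a lookup, not a code search.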



Quote for the day:

"Motivation will almost always beat mere talent." -- Ralph Augustine Norman

Daily Tech Digest - September 16, 2024

AI Ethics – Part I: Guiding Principles for Enterprise

The world has now caught up to what was previously science fiction. We are now designing AI that is in some ways far more advanced than anything Isaac Asimov could have imagined, while at the same time being far more limited. Even though they were originally conceived as fictional principles, there have been efforts to adapt and enhance Isaac Asimov’s Three Laws of Robotics to fit modern enterprise AI-based solutions. Here are some notable examples: Human-Centric AI Principles - Modern AI ethics frameworks often emphasize human safety and well-being, echoing Asimov’s First Law. ... Ethical AI Guidelines - Enterprises are increasingly developing ethical guidelines for AI that align with Asimov’s Second Law. These guidelines ensure that AI systems obey human instructions while prioritizing ethical considerations. ... Bias Mitigation and Fairness - In line with Asimov’s Third Law, there is a strong focus on protecting the integrity of AI systems. This includes efforts to mitigate biases and ensure fairness in AI outputs. ... Enhanced Ethical Frameworks - Some modern adaptations include additional principles, such as the “Zeroth Law,” which prioritizes humanity’s overall well-being. 


Power of Neurodiversity: Why Software Needs a Revolution

Neurodiversity, which includes ADHD, autism spectrum disorder, and dyslexia, presents unique challenges for individuals, yet it also comes with many unique strengths. People on the autism spectrum often excel in logical thinking, while individuals with ADHD can demonstrate exceptional attention to detail when engaged in areas of interest. Those with dyslexia frequently display creative thinking skills. However, software design often fails to accommodate neurodiverse users. For example, websites or apps with cluttered interfaces can overwhelm users with ADHD, while those sites that rely heavily on text make it harder for individuals with dyslexia to process information. Additionally, certain sounds or colors, such as bright colors, may be overwhelming for someone with autism. Users should not have to adapt to poorly designed software. Instead, software designers must create products designed to meet these user needs. Waiting to receive software accessibility training on the job may be too late, as software designers and developers will need to relearn foundational skills. Moreover, accessibility still does not seem to be a priority in the workplace, with most job postings for relevant positions not requiring these skills.


Protect Your Codebase: The Importance of Provenance

When you know that provenance is a vector for a software supply chain attack, you can take action to protect it. The first step is to collect the provenance data for your dependencies, where it exists; projects that meet SLSA level 1 or higher produce provenance data you can inspect and verify. Ensure that trusted identities generate provenance. If you can prove that provenance data came from a system you own and secured or from a known good actor, it’s easier to trust. Cryptographic signing of provenance records provides assurance that the record was produced by a verifiable entity — either a person or a system with the appropriate cryptographic key. Store provenance data in a write-once repository. This allows you to verify later if any provenance data was modified. Modification, whether malicious or accidental, is a warning sign that your dependencies have been tampered with somehow. It’s also important to protect the provenance you produce for yourself and any downstream users. Implement strict access and authentication controls to ensure only authorized users can modify provenance records. 
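The signing-and-verification step can be sketched with an HMAC over a canonicalized record. Note the hedge: real provenance systems (e.g. SLSA tooling) use asymmetric signatures so verifiers never hold the signing secret, and the record fields below are illustrative:

```python
# Sign a provenance record and verify it later; any modification to the
# record invalidates the signature.
import hashlib
import hmac
import json

SECRET = b"demo-key"  # in practice: a managed key, never hard-coded

def sign_record(record):
    """HMAC-SHA256 over a canonical (sorted-key) JSON serialization."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_record(record, signature):
    """Constant-time check that the record still matches its signature."""
    return hmac.compare_digest(sign_record(record), signature)

record = {"artifact": "app-1.2.3.tar.gz", "builder": "ci.example.internal"}
sig = sign_record(record)
print(verify_record(record, sig))                         # True
print(verify_record({**record, "builder": "evil"}, sig))  # False
```

Stored in a write-once repository alongside the artifact, such a signed record lets a downstream consumer detect exactly the tampering scenario the paragraph warns about.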


Are You Technical or Non-Technical? Time to Reframe the Discussion

The term “technical” can introduce bias into hiring and career development, potentially leading to decisions swayed more by perception than by a candidate’s qualifications. Here, hiring decisions can sometimes reflect personal biases if candidates do not fit a stereotypical image or lack certain qualifications not essential for the role. For instance, a candidate might be viewed as not technical enough if they lack server administration experience, even when the job primarily involves software development. Unconscious bias can skew evaluations, leading to decisions based more on perceptions than actual skills. To address this issue, it is important to clearly define the skills required for a position. For example, rather than broadly labeling a candidate as “not technical enough,” it is more effective to specify areas for improvement, such as “needs advanced database management skills.” This approach not only highlights areas where candidates excel, such as developing user-centric reports, but also clarifies specific shortcomings. Clearly stating requirements, such as “requires experience building scalable applications with technology Y,” enhances the transparency and objectivity of the hiring process.


Will Future AI Demands Derail Sustainable Energy Initiatives?

The single biggest thing enterprises are doing to address energy concerns is moving toward more energy-efficient second-generation chips, says Duncan Stewart, a research director with advisory firm Deloitte Technology, via email. "These chips are a bit faster at accelerating training and inference -- about 25% better than first-gen chips -- and their efficiency is almost triple that of first-generation chips." He adds that almost every chipmaker is now targeting efficiency as the most important chip feature. In the meantime, developers will continue to play a key role in optimizing AI energy needs, as well as validating whether AI is even required to achieve a particular outcome. "For example, do we need to use a large language model that requires lots of computing power to generate an answer from enormous data sets, or can we use more narrow and applied techniques, like predictive models that require much less computing because they’ve been trained on much more specific and relevant data sets?" Warburton asks. "Can we utilize compute instances that are powered by low-carbon electricity sources?"


When your cloud strategy is ‘it depends’

As for their use of private cloud, some of the rationale is purely a cost calculation. For some workloads, it’s cheaper to run on premises. “The cloud is not cheaper. That’s a myth,” one of the IT execs told me, while acknowledging cost wasn’t their primary reason for embracing cloud anyway. I’ve been noting this for well over a decade. Convenience, not cost, tends to drive cloud spend—and leads to a great deal of cloud sprawl, as Osterman Research has found. ... You want developers, architects, and others to feel confident with new technology. You want to turn them into allies, not holdouts. Jassy declared, “Most of the big initial challenges of transforming the cloud are not technical” but rather “about leadership—executive leadership.” That’s only half true. It’s true that developers thrive when they have executive air cover. This support makes it easier for them to embrace a future they likely already want. But they also need that executive support to include time and resources to learn the technologies and techniques necessary for executing that new direction. If you want your company to embrace new directions faster, whether cloud or AI or whatever it may be, make it safe for them to learn. 


4 steps to shift from outputs to outcomes

Shifting the focus to outcomes — business results aligned with strategic goals — was the key to unlocking value. David had to teach his teams to see the bigger picture of their business impact. By doing this, every project became a lever to achieve revenue growth, cost savings, and customer satisfaction, rather than just another task list. Simply being busy doesn’t mean a project is successful in delivering business value, yet many teams proudly wear busy badges, leaving executives wondering why results aren’t materializing. Busy doesn’t equal productive. In fact, busy gets in the way of being productive. ... A common issue is project teams lose sight of how their work aligns with the company’s broader goals. When David took over, his teams were still disconnected from those strategic objectives, but by revisiting them and ensuring that every project directly supported those goals, the teams could finally see they were part of something much larger than just a list of tasks. Many business leaders think their teams are mind readers. They hold a town hall, send out a slide deck, and then expect everyone to get it. But months later, they’re surprised when the strategy starts slipping through their fingers.


Is Your Business Ready For The Inevitable Cyberattack?

Cybersecurity threats are inevitable, making it essential for businesses to prepare for the worst. The critical question is: if your business is hacked, is your data protected, and can you recover it in hours rather than days or weeks? If not, you are leaving your business vulnerable to severe disruptions. While everyone emphasises the importance of backups, the real challenge lies in ensuring their integrity and recoverability. Are your backups clean? Can you quickly restore data without prolonged downtime? The total cost of ownership (TCO) of your data protection strategy over time is a crucial consideration. Traditional methods, such as relying on Iron Mountain for physical backups, are cumbersome and time-bound, requiring significant effort to locate and restore data. ... The story of data storage, much like the shift to cloud computing, revolves around strategically placing the right parts of your business operations in the most suitable locations at the right times. Data protection follows the same principle. Resilience is still a topic of frequent discussion, yet its broad nature makes it challenging to establish a clear set of best practices.
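The question "are your backups clean?" can be made concrete with a checksum audit. The sketch below is purely illustrative (not any vendor's tooling, and the file names are invented): record a digest of each file at backup time, then verify the digests before trusting a restore.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def make_manifest(files: dict[str, bytes]) -> dict[str, str]:
    """Record a digest per file at backup time."""
    return {name: sha256_of(blob) for name, blob in files.items()}

def verify_backup(files: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Return names of files whose contents no longer match the manifest."""
    return [name for name, blob in files.items()
            if manifest.get(name) != sha256_of(blob)]
```

Running the verification regularly, not just at restore time, is what turns a backup from a hopeful artifact into a tested recovery capability.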


Digital twin in the hospitality industry - Innovations in planning & designing a hotel

The Metaverse is revolutionising the guest experience: hotels can offer virtual reality tours of rooms and services, giving guests the chance to preview their stay before booking. Moreover, hotels can provide tailored virtual experiences through interactive concierge services and bespoke room décor options. More events will be held through immersive games and entertaining interactivity, bringing better visitor experiences to the hospitality industry. It can generate revenue through tickets, sponsorships, and virtual item sales. ... Operational efficiency is the bottom line of hospitality, where everything seems small but matters so much for guest satisfaction. Imagine the case where the HVAC system of a hotel or its lighting is controlled by a digital twin. Managers can then understand energy consumption patterns, predict what will require maintenance, and adjust settings accordingly based on real-time data. Digital twins also improve the training of staff and the allocation of resources: staff can become comfortable with changes in procedures and layout beforehand by interacting with the virtual model.


The cybersecurity paradigm shift: AI is necessitating the need to fight fire with fire

Organisations should be prepared for the worst-case scenario of a cyber-attack to establish cyber resilience. This involves being able to protect and secure data, detect cyber threats and attacks, and respond with automated data recovery processes. Each element is critical to ensuring an organization can maintain operational integrity under attack. ... However, the reality is that many organisations are unable to keep up. From the company's recent survey released in late January 2024, 79% of IT and security decision-makers said they did not have full confidence in their company’s cyber resilience strategy. Just 12% said their data security, management, and recovery capabilities had been stress tested in the six months prior to being surveyed. ... To bolster cyber resilience, companies must integrate a robust combination of people, processes, and technology. Fostering a skilled workforce equipped to detect and respond to threats effectively starts with having employee education and training in place to keep pace with the rising sophistication of AI-driven phishing attacks.



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - September 15, 2024

Data Lakes Evolve: Divisive Architecture Fuels New Era of AI Analytics

“Data lakes led to the spectacular failure of big data. You couldn’t find anything when they first came out,” Sanjeev Mohan, principal at the SanjMo tech consultancy, told Data Center Knowledge. There was no governance or security, he said. What was needed were guardrails, Mohan explained. That meant safeguarding data from unauthorized access and respecting governance standards such as GDPR. It meant applying metadata techniques to identify data. “The main need is security. That calls for fine-grained access control – not just throwing files into a data lake,” he said, adding that better data lake approaches can now address this issue. Now, different personas in an organization are reflected in different permissions settings. ... This type of control was not standard with early data lakes, which were primarily “append-only” systems that were difficult to update. New table formats changed this. Table formats like Delta Lake, Iceberg, and Hudi have emerged in recent years, introducing significant improvements in data update support. For his part, Sanjeev Mohan said standardization and wide availability of tools like Iceberg give end-users more leverage when selecting systems. 


Data at the Heart of Digital Transformation: IATA's Story

It's always good to know what the business goals are, from a strategic perspective, which informs the data that is needed to enable digital transformation. Data is at the heart of digital transformation. Business strategy comes first and then data strategy, followed by technology strategy. At IATA, we formed the Data Steering Group and identified critical datasets across the organization. We then set up a data catalog and established a governance structure. This was followed by the launch of the Data Governance Committee and the role of a chief data officer. We're going to be implementing an automated data catalog and some automation tools around data quality. Data governance has allowed us to break down data silos. It has also enabled us to establish IATA's industry data strategy. We treat data as an asset, and that data is not owned by any particular division but looked at holistically at the organizational level. And that has allowed us opportunities to do some exciting things in the AI and analytics space and even in the way we deal with our third-party data suppliers and member airlines.


New Android Warning As Hackers Install Backdoor On 1.3 Million TV Boxes

"This is a clear example of how IoT devices can be exploited by malicious actors,” Ray Kelly, fellow at the Synopsys Software Integrity Group, said, “the ability of the malware to download arbitrary apps opens the door to a range of potential threats.” Everything from a TV box botnet for use in distributed denial of service attacks through to stealing account credentials and personal information. Responsibility for protecting users lies with the manufacturers, Kelly said, they must “ensure their products are thoroughly tested for security vulnerabilities and receive regular software updates.” "These off-brand devices discovered to be infected were not Play Protect certified Android devices,” a Google spokesperson said, “If a device isn't Play Protect certified, Google doesn’t have a record of security and compatibility test results.” Whereas these Play Protect certified devices have undergone testing to ensure both quality and user safety, other boxes may not have done. “To help you confirm whether or not a device is built with Android TV OS and Play Protect certified, our Android TV website provides the most up-to-date list of partners,” the spokesperson said.


Engineers Day: Top 5 AI-powered roles every engineering graduate should consider

Generative AI engineer: They play a pivotal role in analysing vast datasets to extract actionable insights and drive data-informed decision-making processes. This role demands a comprehensive understanding of statistical analysis, machine learning techniques, and programming languages such as Python and R. ... AI research scientist: They are at the forefront of advancing AI technologies through groundbreaking research and innovation. With a robust mathematical background, professionals in this role delve into programming languages such as Python and C++, harnessing the power of deep learning, natural language processing, and computer vision to develop cutting-edge solutions. ... Machine Learning engineer: Machine learning engineers are tasked with developing cutting-edge machine learning models and algorithms to address complex problems across various industries. To excel in this role, professionals must develop a strong proficiency in programming languages such as Python, along with a deep understanding of machine learning frameworks like TensorFlow and PyTorch. Expertise in data preprocessing techniques and algorithm development is also quite crucial here. 


Kubernetes attacks are growing: Why real-time threat detection is the answer for enterprises

Attackers are ruthless in pursuing the weakest threat surface of an attack vector, and with Kubernetes containers runtime is becoming a favorite target. That’s because containers are live and processing workloads during the runtime phase, making it possible to exploit misconfigurations, privilege escalations or unpatched vulnerabilities. This phase is particularly attractive for crypto-mining operations where attackers hijack computing resources to mine cryptocurrency. “One of our customers saw 42 attempts to initiate crypto-mining in their Kubernetes environment. Our system identified and blocked all of them instantly,” Gil told VentureBeat. Additionally, large-scale attacks, such as identity theft and data breaches, often begin once attackers gain unauthorized access during runtime where sensitive information is used and thus more exposed. Based on the threats and attack attempts CAST AI saw in the wild and across their customer base, they launched their Kubernetes Security Posture Management (KSPM) solution this week. What is noteworthy about their approach is how it enables DevOps operations to detect and automatically remediate security threats in real-time. 
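The runtime crypto-mining detection described above can be sketched as a simple rule-based triage. This is a toy illustration only — it is not CAST AI's KSPM logic, and the miner names, CPU threshold, and pool ports are assumptions chosen for the example:

```python
# Flag container processes that look like crypto-miners: a known miner
# binary name, or sustained high CPU combined with an outbound
# connection to a port commonly used by mining pools.
KNOWN_MINERS = {"xmrig", "minerd", "cpuminer"}
MINING_POOL_PORTS = {3333, 5555, 7777}

def is_suspicious(proc: dict) -> bool:
    """proc: {'name': str, 'cpu_pct': float, 'remote_ports': set[int]}"""
    if proc["name"] in KNOWN_MINERS:
        return True
    return proc["cpu_pct"] > 90 and bool(proc["remote_ports"] & MINING_POOL_PORTS)

def triage(procs: list[dict]) -> list[str]:
    """Return names of processes flagged for blocking or review."""
    return [p["name"] for p in procs if is_suspicious(p)]
```

Real detectors correlate far more signals (syscalls, image provenance, privilege changes), but the shape is the same: observe runtime behavior, match it against policy, and remediate automatically.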


Begun, the open source AI wars have

Open source leader julia ferraioli agrees: "The Open Source AI Definition in its current draft dilutes the very definition of what it means to be open source. I am absolutely astounded that more proponents of open source do not see this very real, looming risk." AWS principal open source technical strategist Tom Callaway said before the latest draft appeared: "It is my strong belief (and the belief of many, many others in open source) that the current Open Source AI Definition does not accurately ensure that AI systems preserve the unrestricted rights of users to run, copy, distribute, study, change, and improve them." ... Afterwards, in a more sorrowful than angry statement, Callaway wrote: "I am deeply disappointed in the OSI's decision to choose a flawed definition. I had hoped they would be capable of being aspirational. Instead, we get the same excuses and the same compromises wrapped in a facade of an open process." Chris Short, an AWS senior developer advocate, Open Source Strategy & Marketing, agreed. He responded to Callaway that he: "100 percent believe in my soul that adopting this definition is not in the best interests of not only OSI but open source at large will get completely diluted."


What North Korea’s infiltration into American IT says about hiring

Agents working for the North Korean government use stolen identities of US citizens, create convincing resumes with generative AI (genAI) tools, and make AI-generated photos for their online profiles. Using VPNs and proxy servers to mask their actual locations — and maintaining laptop farms run by US-based intermediaries to create the illusion of domestic IP addresses — the perpetrators use either Western-based employees for online video interviews or, less successfully, real-time deepfake videoconferencing tools. And they even offer up mailing addresses for receiving paychecks. ... Among her assigned tasks, Chapman maintained a PC farm of computers used to simulate a US location for all the “workers.” She also helped launder money paid as salaries. The group even tried to get contractor positions at US Immigration and Customs Enforcement and the Federal Protective Services. (They failed because of those agencies’ fingerprinting requirements.) They did manage to land a job at the General Services Administration, but the “employee” was fired after the first meeting. A Clearwater, FL IT security company called KnowBe4 hired a man named “Kyle” in July. But it turns out that the picture he posted on his LinkedIn account was a stock photo altered with AI. 


Contesting AI Safety

The dangers posed by these machines arise from the idea that they “transcend some of the limitations of their designers.” Even if rampant automation and unpredictable machine behavior may destroy us, the same technology promises unimaginable benefits in the far future. Ahmed et al. describe this epistemic culture of AI safety that drives much of today’s research and policymaking, focused primarily on the technical problem of aligning AI. This culture traces back to the cybernetics and transhumanist movements. In this community, AI safety is understood in terms of existential risks—unlikely but highly impactful events, such as human extinction. The inherent conflict between a promised utopia and cataclysmic ruin characterizes this predominant vision for AI safety. Both the AI Bill of Rights and SB 1047 assert claims about what constitutes a safe AI model but fundamentally disagree on the definition of safety. A model deemed safe under SB 1047 might not satisfy the Safe and Effective principle of the White House AI Blueprint; a model that follows the AI Blueprint could cause critical harm. What does it truly mean for AI to be safe? 


Why Companies Should Embrace Ethical Hackers

Security researchers (or hackers, take your pick) are generally good people motivated by curiosity, not malicious intent. Making guesses, taking chances, learning new things, and trying and failing and trying again is fun. The love of the game and ethical principles are two separate things, but many researchers have both in spades. Unfortunately, the government has historically sided with corporations. Spooked by the plot of the Matthew Broderick movie WarGames, Ronald Reagan initiated legislation that resulted in the Computer Fraud and Abuse Act of 1986 (CFAA). Good-faith researchers have been haunted ever since. Then there is The Digital Millennium Copyright Act (DMCA) of 1998, which made it explicitly illegal to “circumvent a technological measure that effectively controls access to a work protected under [copyright law],” something necessary to study many products. A narrow harbor for those engaging in encryption research was carved out in the DMCA, but otherwise, the law put researchers further in danger of legal action against them. All this naturally had a chilling effect as researchers grew tired of being abused for doing the right thing. Many researchers stopped bothering with private disclosures to companies with vulnerable products and took their findings straight to the public. 


Why AI Isn't Just Hype - But A Pragmatic Approach Is Required

It is far better to take a pragmatic view where you open yourself up to the possibilities but proceed with both caution and some help. That must start with working through the buzzwords and trying to understand what people mean, at least at a top level, by an LLM or a vector search or maybe even a Naive Bayes algorithm. But then, it is also important to bring in a trusted partner to help you move to the next stage to build an amazing new digital product, or to undergo a digital transformation with an existing digital product. Whether you’re in start-up mode, you are already a scale-up with a new idea, or you’re a corporate innovator looking to diversify with a new product – whatever the case, you don’t want to waste time learning on the job, and instead want to work with a small, focused team who can deliver exceptional results at the speed of modern digital business. ... Whatever happens or doesn’t happen to GenAI, as an enterprise CIO you are still going to want to be looking for tech that can learn and adapt from circumstance and so help you do the same. At the end of the day, hype cycle or not, AI is really the one tool in the toolbox that can continuously work with you to analyse data in the wild and in non-trivial amounts.



Quote for the day:

"Your attitude is either the lock on or key to your door of success." -- Denis Waitley

Daily Tech Digest - September 14, 2024

Three Critical Factors for a Successful Digital Transformation Strategy

Just as important as the front-end experience are the back-end operations that keep and build the customer relationship. Value-added digital services that deliver back-end operational excellence can improve the customer experience through better customer service, improved security and more. Emerging tech like artificial intelligence can substantially improve how companies get a clearer view into their operations and customer base. Take data flow and management, for example. Many executives report they are swimming in information, yet around half admit they struggle analyzing it, according to research by Paynearme. While data is important, the insights derived from that data are key to the conclusions executives must draw. Maintaining a digital record of customer information, transaction history, spend behaviors and other metrics and applying AI to analyze and inform decisions can help companies provide better service and protect their end users. They can streamline customer service, for instance, by immediately sourcing relevant information and delivering a resolution in near-real time, or by automating the analysis of spend behavior and location data to shut down potential fraudsters.
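The automated spend-behavior analysis mentioned above can be illustrated with a minimal statistical check. This is a hypothetical sketch, not any vendor's fraud model: flag transactions that deviate sharply from a customer's historical pattern.

```python
# Flag transactions more than z_threshold standard deviations from the
# customer's historical mean spend. Threshold and method are
# illustrative assumptions only.
from statistics import mean, pstdev

def flag_outliers(history: list[float], new_txns: list[float],
                  z_threshold: float = 3.0) -> list[float]:
    """Return transactions that look anomalous against the history."""
    mu = mean(history)
    sigma = pstdev(history)
    if sigma == 0:
        return [t for t in new_txns if t != mu]
    return [t for t in new_txns if abs(t - mu) / sigma > z_threshold]
```

Production systems layer on location data, merchant categories, and learned models, but the principle is the one in the text: derive a baseline from the digital record, then act in near-real time on deviations from it.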


AI reshaping the management of remote workforce

In a remote work setting, one of the biggest challenges for organizations remains the streamlining of operations. For a scattered team, AI emerges as a revolutionary tool for automating shift scheduling and rostering using historical pattern analytics. Historical data on staff availability, productivity, and work patterns enable organizations to optimise schedules and strike a balance between operational needs and employee preferences. Subsequently, this reduces conflicts and enhances overall work efficiency. Apart from this, AI analyses staff work duration and shifts, further enabling organizations to predict staffing needs and optimise resource allocation. This enhances capacity modelling to ensure the right team member is available to handle tasks during peak times, preventing overstaffing or understaffing issues. ... With expanding use cases, AI-powered facial recognition technology has become a critical part of identity verification and promoting security in remote work settings. Organisations need to ensure security and confidentiality at all stages of their work. In tandem, AI-powered facial recognition ensures that only authorized personnel have access to the company’s sensitive systems and data. 
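The historical-pattern approach to rostering can be sketched very simply: average past demand per time slot, then size the roster to a service ratio. This is an illustrative toy, not a real workforce-management algorithm, and the tasks-per-agent ratio is an invented assumption:

```python
# Forecast agents needed per (weekday, hour) slot from historical
# observations of tasks handled in that slot.
from collections import defaultdict
from math import ceil

def forecast_staffing(history: list[tuple[int, int, int]],
                      tasks_per_agent: int = 10) -> dict[tuple[int, int], int]:
    """history: (weekday, hour, tasks_handled) observations.
    Returns the number of agents needed per (weekday, hour) slot."""
    observed = defaultdict(list)
    for weekday, hour, tasks in history:
        observed[(weekday, hour)].append(tasks)
    return {slot: ceil(sum(v) / len(v) / tasks_per_agent)
            for slot, v in observed.items()}
```

Real systems add seasonality, employee preferences, and constraint solvers on top, but the core move is the one described: turn historical work patterns into a per-slot staffing prediction.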


The DPDP act: Navigating digital compliance under India’s new regulatory landscape

Adapting to the DPDPA will require tailored approaches, as different sectors face unique challenges based on their data handling practices, customer bases, and geographical scope. However, some fundamental strategies can help businesses effectively navigate this new regulatory landscape. First, conducting a comprehensive data audit is essential. Businesses need to understand what data they collect, where it is stored, and who has access to it. Mapping out data flows allows organizations to identify risks and address them proactively, laying the groundwork for robust compliance. Appointing a Data Protection Officer (DPO) is another critical step. The DPO will be responsible for overseeing compliance efforts, serving as the primary point of contact for regulatory bodies, and handling data subject requests. While it’s not yet established whether it’s mandatory or not, it is safe to say that this role is vital for embedding a culture of data privacy within the organisation. Technology can also play a significant role in ensuring compliance. Tools such as Unified Endpoint Management (UEM) solutions, encryption technologies, and data loss prevention (DLP) systems can help businesses monitor data flows, detect anomalies, and prevent unauthorized access. 


10 Things To Avoid in Domain-Driven Design (DDD)

To prevent potential issues, it is your responsibility to maintain a domain model that is uncomplicated and accurately reflects the domain. This diligent approach is important to focus on modeling the components of the domain that offer strategic importance and to streamline or exclude less critical elements. Remember, Domain-Driven Design (DDD) is primarily concerned with strategic design and not with needlessly complexifying the domain model with unnecessary intricacies. ... It's crucial to leverage Domain-Driven Design (DDD) to deeply analyze and concentrate on the domain's most vital and influential parts. Identify the aspects that deliver the highest value to the business and ensure that your modeling efforts are closely aligned with the business's overarching priorities and strategic objectives. Actively collaborating with key business stakeholders is essential to gain a comprehensive understanding of what holds the greatest value to them and subsequently prioritize these areas in your modeling endeavors. This approach will optimally reflect the business's critical needs and contribute to the successful realization of strategic goals.


How to Build a Data Governance Program in 90 Days

With a new data-friendly CIO at the helm, Hidalgo was able to assemble the right team for the job and, at the same time, create an environment of maximum engagement with data culture. She assembled discussion teams and even a data book club that read and reviewed the latest data governance literature. In turn, that team assembled its own data governance website as a platform not just for sharing ideas but also to spread the momentum. “We kept the juices flowing, kept the excitement,” Hidalgo recalled. “And then with our data governance office and steering committee, we engaged with all departments, we have people from HR, compliance, legal product, everywhere – to make sure that everyone is represented.” ... After choosing a technology platform in May, Hidalgo began the most arduous part of the process: preparation for a “jumpstart” campaign that would kick off in July. Hidalgo and her team began to catalog existing data one subset of data at a time – 20 KPIs or so – and complete its business glossary terms. Most importantly, Hidalgo had all along been building bridges between Shaw’s IT team, data governance crew, and business leadership to the degree that when the jumpstart was completed – on time – the entire business saw the immense value-add of the data governance that had been built.


Varied Cognitive Training Boosts Learning and Memory

The researchers observed that varied practice, not repetition, primed older adults to learn a new working memory task. Their findings, which appear in the journal Intelligence, propose diverse cognitive training as a promising whetstone for maintaining mental sharpness as we age. “People often think that the best way to get better at something is to simply practice it over and over again, but robust skill learning is actually supported by variation in practice,” said lead investigator Elizabeth A. L. Stine-Morrow ... The researchers narrowed their focus to working memory, or the cognitive ability to hold one thing in mind while doing something else. “We chose working memory because it is a core ability needed to engage with reality and construct knowledge,” Stine-Morrow said. “It underpins language comprehension, reasoning, problem-solving and many sorts of everyday cognition.” Because working memory often declines with aging, Stine-Morrow and her colleagues recruited 90 Champaign-Urbana locals aged 60-87. At the beginning and end of the study, researchers assessed the participants’ working memory by measuring each person’s reading span: their capacity to remember information while reading something unrelated.


Why Cloud Migrations Fail

One stumbling block on the cloud journey is misunderstanding or confusion around the shared responsibility model. This framework delineates the security obligations of cloud service providers, or CSPs, and customers. The model necessitates a clear understanding of end-user obligations and highlights the need for collaboration and diligence. Broad assumptions about the level of security oversight provided by the CSP can lead to security/data breaches that the U.S. National Security Agency (NSA) notes “likely occur more frequently than reported.” It’s also worth noting that 82% of breaches in 2023 involved cloud data. The confusion is often magnified in cases of a cloud “lift-and-shift,” a method where business-as-usual operations, architectures and practices are simply pushed into the cloud without adaptation to their new environment. In these cases, organizations may be slow to implement proper procedures, monitoring and personnel to match the security limitations of their new cloud environment. While the level of embedded security can differ depending on the selected cloud model, the customer must often enact strict security and identity and access management (IAM) controls to secure their environment.


AI - peril or promise?

The interplay between AI data centers and resource usage necessitates innovative approaches to mitigate environmental impacts. Advances in cooling technology, such as liquid immersion cooling and the use of recycled water, offer potential solutions. Furthermore, utilizing recycled or non-potable water for cooling can alleviate the pressure on freshwater resources. Moreover, AI itself can be leveraged to enhance the efficiency of data centers. AI algorithms can optimize energy use by predicting cooling needs, managing workloads more efficiently, and reducing idle times for servers. Predictive maintenance powered by AI can also prevent equipment failures, thereby reducing the need for excessive cooling. This is good news: as the sector continues to use AI, it benefits from greater efficiencies and cost savings that drive improvements in services, and the expected impact of AI on the operational side for data centres is very positive. Over 65 percent of our survey respondents reported that their organizations are regularly using generative AI, nearly double the percentage from the 2023 survey, and around 90 percent of respondents expect their data centers to be more efficient as a direct result of AI applications.
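Predicting cooling needs from load history is, at its simplest, a regression problem. The sketch below fits a least-squares line to hypothetical (server load, cooling power) pairs — the data and the linear model are illustrative assumptions, not how any real data center operates:

```python
# Fit an ordinary-least-squares line to historical (load %, cooling kW)
# pairs, then use it to pre-position cooling for a forecast load.
def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Returns (slope, intercept) of the least-squares fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict_cooling(loads: list[float], cooling: list[float],
                    next_load: float) -> float:
    """Predict cooling power (kW) for an expected server load (%)."""
    slope, intercept = fit_line(loads, cooling)
    return slope * next_load + intercept
```

Production systems (such as the reinforcement-learning controllers some hyperscalers have described) are far more sophisticated, but even this shape shows the idea: anticipate demand instead of reacting to it.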


HP Chief Architect Recalibrates Expectations Of Practical Quantum Computing’s Arrival From Generations To Within A Decade

Hewlett Packard Labs is now adopting a holistic co-design approach, partnering with other organizations developing various qubits and quantum software. The aim is to simulate quantum systems to solve real-world problems in solid-state physics, exotic condensed matter physics, quantum chemistry, and industrial applications. “What is it like to actually deliver the optimization we’ve been promised with quantum for quite some time, and achieve that on an industrial scale?” Bresniker posed. “That’s really what we’ve been devoting ourselves to—beginning to answer those questions of where and when quantum can make a real impact.” One of the initial challenges the team tackled was modeling benzyne, an exotic chemical derived from the benzene ring. “When we initially tackled this problem with our co-design partners, the solution required 100 million qubits for 5,000 years—that’s a lot of time and qubits,” Bresniker told Frontier Enterprise. Considering current quantum capabilities are in the tens or hundreds of qubits, this was an impractical solution. By employing error correction codes and simulation methodologies, the team significantly reduced the computational requirements.


New AI reporting regulations

At its core, the new proposal requires developers and cloud service providers to fulfill reporting requirements aimed at ensuring the safety and cybersecurity resilience of AI technologies. This necessitates the disclosure of detailed information about AI models and the platforms on which they operate. One of the proposal’s key components is cybersecurity. Enterprises must now demonstrate robust security protocols and engage in what’s known as “red-teaming”—simulated attacks designed to identify and address vulnerabilities. This practice is rooted in longstanding cybersecurity practices, but it does introduce new layers of complexity and cost for cloud users. Based on the negative impact of red-teaming on enterprises, I suspect it may be challenged in the courts. The regulation does increase focus on security testing and compliance. The objective is to ensure that AI systems can withstand cyberthreats and protect data. However, this is not cheap. Achieving this result requires investments in advanced security tools and expertise, typically stretching budgets and resources. My “back of the napkin” calculations figure about 10% of the system’s total cost.



Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - September 13, 2024

AI can change belief in conspiracy theories, study finds

“Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has ‘gone down the rabbit hole’ and come to believe a conspiracy theory,” the team wrote. Crucially, the researchers said, the approach relies on an AI system that can draw on a vast array of information to produce conversations that encourage critical thinking and provide bespoke, fact-based counterarguments. ... “About one in four people who began the experiment believing a conspiracy theory came out the other end without that belief,” said Costello. “In most cases, the AI can only chip away – making people a bit more sceptical and uncertain – but a select few were disabused of their conspiracy entirely.” The researchers added that reducing belief in one conspiracy theory appeared to reduce participants’ belief in other such ideas, at least to a small degree, while the approach could have applications in the real world – for example, AI could reply to posts relating to conspiracy theories on social media. Prof Sander van der Linden of the University of Cambridge, who was not involved in the work, questioned whether people would engage with such AI voluntarily in the real world.


Does Value Stream Management Really Work?

Value stream management is indeed working when it is approached holistically by integrating the framework with technology and people, according to Saraha Burnett, chief operations officer at full-service digital experience and engineering firm TMG. "By mapping and optimizing every step in the customer journey, companies can eliminate waste, create efficiency and ultimately deliver sought-after value to customers," says Burnett in an email interview. "The key lies in continuous improvement and stakeholder engagement throughout the value stream, ensuring alignment and commitment to delivering responsiveness and quality to customer needs."


Digital ID hackathons to explore real-world use cases

The hackathons aim to address the cold start problem by involving verifiers to facilitate the widespread adoption of mDLs. In this context, the cold start problem refers to a two-sided marketplace that needs both identity holders and verifiers before either side sees much value. The primary focus of the hackathon will be on building minimum viable products (MVPs) that showcase the functionality of the solution. These MVPs will enable participants to test real-world use cases for mDLs. The digital version of California driver's licenses has a variety of potential uses, according to the OpenID Foundation, including facilitating TSA checks at airport security checkpoints, verifying age for purchasing age-restricted items, accessing DMV websites online, and peer-to-peer identification. For the hackathon, the California DMV will issue mDLs in two formats: the ISO 18013-5 standard and the W3C Verifiable Credentials v1.1 specification. The dual issuance gives verifiers the flexibility to choose the verification method that best aligns with their system requirements, the foundation says. Christopher Goh, the national harmonization lead for digital identity at Austroads, has written a one-pager discussing the various standards within the ISO/IEC 18013-5 framework specifically related to mDLs.


Microsoft VS Code Undermined in Asian Spy Attack

"While the abuse of VSCode is concerning, in our opinion, it is not a vulnerability," Assaf Dahan, director of threat research for Unit 42, clarifies. Instead, he says, "It's a legitimate feature that was abused by threat actors, as often happens with many legitimate software." And there are a number of ways organizations can protect against a bring-your-own-VSCode attack. Besides hunting for indicators of compromise (IoCs), he says, "It's also important to consider whether the organization would want to limit or block the use of VSCode on endpoints of employees that are not developers or do not require the use of this specific app. That can reduce the attack surface." "Lastly, consider limiting access to the VSCode tunnel domains '.tunnels.api.visualstudio[.]com' or '.devtunnels[.]ms' to users with a valid business requirement. Notice that these domains are legitimate and are not malicious, but limiting access to them will prevent the feature from working properly and consequently make it less attractive for threat actors," he adds.


Rather Than Managing Your Time, Consider Managing Your Energy

“Achievement is no longer enough to be successful,” Sunderland says. “People also want to feel happy at the same time. Before, people were concerned only with thinking (mental energy) and doing (physical energy). But that success formula no longer works. Today, it’s essential to add feelings (emotional energy) and inner self-experience (spiritual energy) into the mix for people to learn how to be able to connect to and manage their energy.” ... Sunderland says all forms of human energy exist in relation to one another. “When these energies are in sync with each other, people’s energy will be in flow. People who maintain good health will be able to track those feelings (emotional energy) that flow through their bodies (physical energy), which is an essential skill to help increase energy awareness. With greater levels of energy awareness, people can grow their self-acceptance (emotional energy), which enhances their self-confidence.” He says that as confidence builds, people experience greater clarity of thought (mental energy) and they are able to increase their ability to speak truth (spiritual energy), amplifying their creative energy. 


Mastercard Enhances Real-Time Threat Visibility With Recorded Future Purchase

The payments network has made billions of dollars' worth of acquisitions through the years. Within Mastercard's security solutions segment, key focal points center on examining and protecting digital identities, protecting transactions and using insights from 143 billion annual payments to fashion real-time intelligence that merchants and FIs can use to anticipate new threats. By way of example, the firm acquired Ekata in 2021 to score transactions for the likelihood of fraud through robust identity verification. All told, Mastercard has invested more than $7 billion over the past five years in its efforts to protect the digital economy. Artificial intelligence (AI) is a key ingredient here, and Gerber told PYMNTS that the company has been a pioneer in harnessing generative AI to extract trends from huge swaths of data and create "identity graphs" that provide immediate value to any merchant or FI that wants to understand more about the individuals interacting with them in the digital realm. Other "intelligence graphs" connect the dots across data points to turn threat-related data into actionable insights.


2 Open Source AI Tools That Reduce DevOps Friction

DevOps has been built on turning infrastructure into code, aka Infrastructure as Code (IaC). This includes deployment pipelines, monitoring, repositories: anything built upon configuration can be represented in code. This is where AI tools like ChatGPT and AIaC come into play. AIaC, an open source command-line interface (CLI) tool, enables developers to generate IaC templates, shell scripts and more, directly from the terminal using natural language prompts. This reduces the need to hand-write boilerplate, making the process faster and less error-prone. ... The use of AI in DevOps is still in its early stages, but it's quickly gaining momentum with the introduction of new open source and commercial services. The rapid pace of innovation suggests that AI will soon be embedded in most DevOps tools. From automated code generation with AIaC to advanced diagnostics with K8sGPT, the possibilities seem endless. Firefly is not just observing this revolution — it's actively contributing to it. By integrating AI into DevOps workflows, teams can work smarter, not harder.
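The pattern behind tools like AIaC reduces to a few lines: send a natural-language request to a model, pull the fenced code block out of the reply, and return it as a template. The sketch below is illustrative only, not AIaC's actual implementation; `fake_model` is a stand-in for whatever LLM backend such a tool wraps.

```python
import re

def extract_code(reply: str) -> str:
    """Pull the first fenced code block out of a model response."""
    match = re.search(r"```[\w-]*\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else reply

def generate_iac(request: str, ask_model) -> str:
    """Turn a natural-language request into an IaC template."""
    prompt = f"Generate Terraform for: {request}. Reply with one fenced code block."
    return extract_code(ask_model(prompt))

# Stand-in model so the sketch runs offline; a real tool calls an LLM API here.
def fake_model(prompt: str) -> str:
    return ('Here you go:\n```hcl\n'
            'resource "aws_s3_bucket" "demo" {\n  bucket = "demo"\n}\n```')

print(generate_iac("an S3 bucket named demo", fake_model))
```

The extraction step matters in practice: models wrap code in prose, and stripping everything outside the fence is what makes the output pipeable straight into a `.tf` file.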


How to make Infrastructure as Code secure by default

Scanning IaC templates before deployment is undeniably important; it's an effective way to identify potential security issues early in the development process. It can help prevent security breaches and ensure that your cloud infrastructure aligns with security best practices. If IaC scanning tools are integrated into your CI/CD pipelines, you can also run automated scans on each code commit or pull request, catching errors early. Post-deployment scans matter because they assess the infrastructure in its operational environment, which can surface issues that weren't identified in dev and test environments, such as unexpected dependencies or conflicts between resources. Any manual fixes you make to address these problems will also require you to update your existing IaC templates; otherwise, any apps using those templates will be deployed with the same issues baked in. And while identifying these issues in production environments is important to overall security, it also increases your costs and requires your team to apply manual fixes to both the application and the IaC.
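A pre-deployment scan reduces, in miniature, to walking the parsed template and flagging resources that violate policy. The sketch below is a minimal illustration, assuming the template has already been parsed into dicts; the rule set and field names are hypothetical, not those of any real scanner.

```python
def scan(resources: list[dict]) -> list[tuple[str, str]]:
    """Flag resources that violate two simple, hypothetical security rules."""
    findings = []
    for res in resources:
        if res.get("type") == "aws_s3_bucket" and not res.get("encryption"):
            findings.append((res["name"], "bucket is not encrypted"))
        if res.get("public"):
            findings.append((res["name"], "resource is publicly accessible"))
    return findings

# A toy parsed template: one misconfigured bucket, one compliant bucket.
resources = [
    {"type": "aws_s3_bucket", "name": "logs", "encryption": False, "public": True},
    {"type": "aws_s3_bucket", "name": "data", "encryption": True, "public": False},
]

for name, issue in scan(resources):
    print(f"FAIL {name}: {issue}")
```

Wired into a CI job, a nonzero finding count would fail the commit, which is exactly the "catch it before production" property the pre-deployment scan provides.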


New brain-on-a-chip platform to deliver 460x efficiency boost for AI tasks

Despite its novel approach, IISc’s platform is designed to work alongside existing AI hardware, rather than replace it. Neuromorphic accelerators like the one developed by IISc are particularly well-suited for offloading tasks that involve repetitive matrix multiplication — a common operation in AI. “GPUs and TPUs, which are digital, are great for certain tasks, but our platform can take over when it comes to matrix multiplication. This allows for a major speed boost,” explained Goswami. ... As the demand for more advanced AI models increases, existing digital systems are nearing their energy and performance limits. Silicon-based processors, which have driven AI advancements for years, are starting to show diminishing returns in terms of speed and efficiency. “With silicon electronics reaching saturation, designing brain-inspired accelerators that can work alongside silicon chips to deliver faster, more efficient AI is becoming crucial,” Goswami noted. By working with molecular films and analog computing, IISc is offering a new path forward for AI hardware, one that could dramatically cut energy consumption while boosting computational power.


Android Trojans Still Pose a Threat, Researchers Warn

Affected users appear to have been tricked into installing the malware, which doesn't appear to be getting distributed via official Google channels. "Based on our current detections, no apps containing this malware are found on Google Play," a Google spokesperson told Information Security Media Group. "Android users are automatically protected against known versions of this malware by Google Play Protect, which is on by default on Android devices with Google Play Services," the spokesperson said. "Google Play Protect can warn users or block apps known to exhibit malicious behavior, even when those apps come from sources outside of Play." Researchers said they first spotted the malware when it was uploaded to analysis site VirusTotal in May from Uzbekistan, in the form of a malicious app made to appear as if it was developed by a "local tax authority." By tracing the IP address to which the malware attempted to "phone home," the researchers found other .apk (Android package) files that showed similar behavior, which they traced to attacks that began by November 2023.



Quote for the day:

"Sometimes it takes a good fall to really know where you stand." -- Hayley Williams