Daily Tech Digest - March 08, 2024

What is the cost of not doing enterprise architecture?

Without an EA, an organisation may struggle to show how its IT projects and technology decisions align with its business goals, leading to initiatives that do not support the overall business strategy or deliver optimal value. A company favouring growth through acquisition should be buying systems and negotiating contracts that support onboarding more users and more data and transactions without costs increasing significantly. The EA should make it possible to understand which processes and technologies would be affected by the strategy, to model the impact, and to inform the decision process. Equally, the architecture can consider strategic trends and be designed to support them; for example, the now-bankrupt US retailer Sears was slow to adopt e-commerce, allowing competitors to capture the growing online shopping market. ... Your Enterprise Architecture provides a framework for making informed decisions about IT investments and strategies. Without the holistic view that EA offers, decision-makers may lack the full context for their decisions, leading to choices that are suboptimal or that fail to consider the interdependencies and long-term implications for the organisation.


Making Software Development Boring to Deliver Business Value

Boerman argued that software development should become boring. He made the distinction between boring software and exciting software: Boring software in that categorization resembles all software that has been built countless times, and will be so a billion times more. In this context, I am specifically thinking about back-end systems, though this rings true for front-end systems as well. Exciting software is all the projects that require creativity to build. Think about purpose-built algorithms, automations, AI integrations, and the like. Making software development boring again is about placing a primary focus on delivering business value, and making its delivery predictable and repeatable, Boerman argued. This requires moving infrastructure out of the way so that it is still there, but does not burden the day-to-day development process: While infrastructure takes most of the development time, it technically delivers the least amount of business value, which can be found in the data and the operations executed against it. New, exciting experiments may be fast-moving and unstable, while the boring core is meant to be, and remain, of high quality so that it can withstand outside disruptions, Boerman concluded.


New TDWI Assessment Examines the State of Data Quality Maturity Today

“With data becoming such a critical part of a business’s ability to compete, it’s no wonder there’s a growing emphasis on data quality,” Halper began. “Organizations need better and faster insights in order to succeed, and for that they need better, more enriched data sets for advanced analytics -- such as predictive analytics and machine learning.” She explained that to do this, organizations are not only increasing the amount of traditional, structured data they’re collecting, they’re also looking for newer data types, such as unstructured text data or semistructured data from websites. Taken together, these various types of data can offer significantly more opportunities for insights, she added. As an example, Halper mentioned the idea of an organization using notes from its call center -- typically unstructured or semistructured text data -- to analyze customer satisfaction, either with a particular product or with the company as a whole. This information can then be fed back into an analytics or machine learning routine and reveal patterns or other insights meaningful to the company. “Regardless of the type of data or its end use,” she said, “the original data must be high quality. It must be accurate, complete, timely, trustworthy, and fit for purpose.”
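
As a rough illustration of the "fit for purpose" point, the sketch below applies minimal completeness and timeliness checks to hypothetical call-center notes before running a toy keyword-based sentiment score. The note format, freshness threshold, and lexicon are assumptions made for the example, not part of the TDWI assessment.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical call-center notes: unstructured text plus capture metadata.
notes = [
    {"id": 1, "text": "Customer upset about late delivery, wants refund",
     "captured_at": now - timedelta(days=2)},
    {"id": 2, "text": "", "captured_at": now - timedelta(days=400)},
    {"id": 3, "text": "Very happy with the new plan, praised support agent",
     "captured_at": now - timedelta(days=5)},
]

MAX_AGE = timedelta(days=90)                      # "timely": captured recently enough
NEGATIVE = {"upset", "refund", "late", "cancel"}  # toy lexicon, an assumption
POSITIVE = {"happy", "praised", "great"}

def quality_ok(note):
    """Minimal completeness and timeliness checks before analysis."""
    complete = bool(note["text"].strip())
    timely = now - note["captured_at"] <= MAX_AGE
    return complete and timely

def naive_sentiment(text):
    """Toy score: positive keyword hits minus negative keyword hits."""
    words = set(text.lower().replace(",", " ").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

usable = [n for n in notes if quality_ok(n)]
for n in usable:
    print(n["id"], naive_sentiment(n["text"]))
```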


The Five Biggest Challenges with Large-Scale Cloud Migrations

Several issues can arise when attempting to migrate legacy systems to the cloud. The system may not be optimized for cloud performance and scalability, so it is important to develop and implement solutions that boost the system’s speed and capacity to get the most from the cloud migration. Other issues common with legacy system integration include data security, data integrity, and cost management. The latter is often a particular concern because companies may also be required to pay for training and maintenance in addition to the cost of migration. ... The risks of migrating data to the cloud include data security, data corruption, and excessive downtime, which can cost money and negatively impact performance. To optimize migration success and minimize downtime, it is vital for companies to understand the amount of data involved and the bandwidth necessary to complete the transfer with minimal work disruption. ... Due to poor infrastructure and configuration, many companies cannot take advantage of the benefits of cloud computing. Often, companies fail to maximize the move from fixed infrastructure to scalable and dynamic cloud resources.
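
A back-of-the-envelope calculation can make the data-volume and bandwidth question concrete; the figures below are purely illustrative assumptions, not measurements from any real migration.

```python
# Rough estimate of transfer time for a cloud migration.
# All figures below are illustrative assumptions, not measured values.

data_tb = 50          # data to migrate, in terabytes
link_gbps = 1.0       # dedicated bandwidth, in gigabits per second
utilisation = 0.7     # realistic share of the link usable for the migration

data_bits = data_tb * 1e12 * 8                 # TB -> bits (decimal units)
effective_bps = link_gbps * 1e9 * utilisation  # usable throughput in bits/s

hours = data_bits / effective_bps / 3600
print(f"Estimated transfer time: {hours:.1f} hours ({hours / 24:.1f} days)")
```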


Getting the BELT: Empowering Executive Leadership in Data Governance

The active engagement of the ELT in the data governance process is critical not only for setting a strategic direction, but also for catalyzing a shift in organizational mindset. By championing the principles of NIDG, the ELT paves the way for a governance model that is both effective and sustainable. This leadership commitment helps in breaking down silos, promoting cross-departmental collaboration, and establishing a shared vision that recognizes data as a pivotal asset. Through their actions and decisions, executive leaders serve as role models, demonstrating the value of data governance and encouraging a culture of continuous improvement. Their involvement ensures that data governance initiatives are aligned with business strategies, driving the organization toward achieving its goals while maintaining data integrity and compliance. ... The journey towards effective data governance begins with buy-in, not just from the ELT, but across the entire organization. Achieving this requires the ELT to understand the strategic importance of data governance and to communicate this value convincingly. 


Going passwordless with passkeys in Windows and .NET

Passkeys managed by Windows Hello are “device-bound passkeys” tied to your PC. Windows can support other passkeys, for example passkeys stored on a nearby smartphone or on a modern security token. There’s even the option of using third parties to provide and manage passkeys, for example via a banking app or a web service. Windows passkey support allows you to save keys on third-party devices. You can use a QR code to transfer the passkey data to the device, or if it’s a linked Android smartphone, you can transfer it over a local wireless connection. In both cases the devices need a biometric identity sensor and secure storage. As an alternative, Windows will work with FIDO2-ready security keys, storing passkeys on a YubiKey or similar device. A Windows Security dialog helps you choose where to save your keys and how. If you’re saving the key on Windows, you’ll be asked to verify your identity using Windows Hello before the key is saved locally. If you’re using Windows 11 22H2 or later, you can manage passkeys through Windows settings.


Generative AI on its own will not improve the customer experience

Businesses around the world hope that, beyond the hype of generative AI, there lies a near-term path to improving business efficiency and in parallel a longer-term ability to grow revenue. There is one, not insignificant, consideration to weigh before the true savings can be measured. In 2024, as in 2023, generative AI and ChatGPT both trail "Customer Service / Telephone number" as search terms on Google in most countries. Most of those searches involve a quest by a customer to reach a human being. There is great frustration because most businesses are working hard to make it difficult to reach a person. This gap between the corporate commitment to removing the human connection in customer service and the customer's desire for a human connection almost always points to a bad business process. The business must examine why the customer doesn't use the self-service channel. This discovery process is a precursor to deeper self-service powered by generative AI. Our first recommendation is to step back and ensure the customer service process you want to supercharge with generative AI satisfies customers. 


How continuous SDL can help you build more secure software

Beyond making the SDL automated, data-driven, and transparent, Microsoft is also focused on modernizing the practices that the SDL is built on to keep up with changing technologies and ensure our products and services are secure by design and by default. In 2023, six new requirements were introduced, six were retired, and 19 received major updates. We’re investing in new threat modeling capabilities, accelerating the adoption of new memory-safe languages, and focusing on securing open-source software and the software supply chain. We’re committed to providing continued assurance of open-source software security, measuring and monitoring open-source code repositories to ensure vulnerabilities are identified and remediated on a continuous basis. Microsoft is also dedicated to bringing responsible AI into the SDL, incorporating AI into our security tooling to help developers identify and fix vulnerabilities faster. We’ve built new capabilities like the AI Red Team to find and fix vulnerabilities in AI systems. By introducing modernized practices into the SDL, we can stay ahead of attacker innovation, designing faster defenses that protect against new classes of vulnerabilities.


Rethinking SDLC security and governance: A new paradigm with identity at the forefront

Poorly governed identities have become a gateway for substantial incidents. High-profile breaches at companies like LastPass and Okta have illuminated the attackers' method: exploiting the identity attack vector to orchestrate some of the most notable breaches, using compromised accounts to potentially alter source code and extract valuable information. These events underscore a clear and present trend of identity theft through phishing or ransomware attacks, which then pave the way for attackers to infiltrate the software development lifecycle (SDLC), leading to the insertion of malicious code and the theft of data. Despite the clear risks, organizations continue to fumble in securing and managing these identities, making it the riskiest yet most overlooked attack vector facing SDLC security and governance today. As we pivot to address this critical oversight, it's imperative to understand the role of identity within the SDLC. The “Inverted Pyramid" analogy is a useful conceptual framework that captures the essence of the old and new paradigms and how reorienting our approach can better protect against these insidious threats.


Analyzing the CEO–CMO relationship and its effect on growth

It’s estimated that only 10 percent of Fortune 250 CEOs have marketing experience. There’s also a dramatic acceleration of digital technology in the world of marketing. We’re no longer judging marketing by television commercials. There’s a whole slew of different components to think through. And the data piece that you hinted at is that these customers’ signals are now everywhere. It’s incumbent upon us as marketers to interpret them and feed them back to our organizations in such a way that we don’t talk about data but we talk about insights and are able to connect the dots. ... As we come up with a means to measure marketing, the CEO or CFO needs to learn the measurement systems in place to understand what it means when I cut budget, what it means when I invest in it, and how we tie those activities to outcomes. That robust measurement system can help you understand your brand, how your customers perceive your brand, and what level of fidelity they give you credit for. That’s where the brand scores are really helpful. But you also need an econometric model to connect how the money you’re spending on different channels such as video, content, and search—all working in tandem—helps create the results you want.



Quote for the day:

"Success is the sum of small efforts, repeated day-in and day-out." -- Robert Collier

Daily Tech Digest - March 07, 2024

3 Key Metrics to Measure Developer Productivity

The team dimension considers business outcomes in a wider organizational context. While software development teams must work efficiently together, they must also work with teams across other business units. Often, non-technical factors, such as peer support, working environment, psychological safety and job enthusiasm play a significant role in boosting productivity. Another framework is SPACE, an acronym for satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow. SPACE was developed to capture some of the more nuanced and human-centered dimensions of productivity. SPACE metrics, in combination with DORA metrics, can fill in the productivity measurement gaps by correlating productivity metrics to business outcomes. McKinsey found that combining DORA and SPACE metrics with “opportunity-focused” metrics can produce a well-rounded view of developer productivity. That, in turn, can lead to positive outcomes, as McKinsey reports: a 20% to 30% reduction in customer-reported product defects, a 20% improvement in employee experience scores and a 60% improvement in customer satisfaction ratings.
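
As a minimal sketch of how two of the DORA metrics might be derived, the snippet below computes deployment frequency and median lead time for changes from a few hypothetical deployment records; the record format and reporting window are assumptions for illustration, not part of the DORA or SPACE specifications.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records: when the change was committed and deployed.
deployments = [
    {"committed": datetime(2024, 3, 1, 9),  "deployed": datetime(2024, 3, 1, 15)},
    {"committed": datetime(2024, 3, 3, 11), "deployed": datetime(2024, 3, 4, 10)},
    {"committed": datetime(2024, 3, 5, 14), "deployed": datetime(2024, 3, 5, 18)},
]

period_days = 7  # reporting window covered by the records above

deploy_frequency = len(deployments) / period_days
lead_times_h = [(d["deployed"] - d["committed"]).total_seconds() / 3600
                for d in deployments]

print(f"Deployment frequency: {deploy_frequency:.2f} deploys/day")
print(f"Median lead time for changes: {median(lead_times_h):.1f} hours")
```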


Metadata Governance: Crucial to Managing IoT

Governance of metadata requires formalization and agreement among stakeholders, based on existing Data Governance processes and activities. Through this program, business stakeholders engage in conversations to agree on what the data is and its context, generating standards around organizational metadata. The organization sees the results in a Business Glossary or data catalog. In addition to Data Governance tools, IT tools significantly contribute to metadata generation and usage, tracking updates, and collecting data. These applications, often equipped with machine learning capabilities, automate the gathering, processing, and delivery of metadata to identify patterns within the data without the need for manual intervention. ... The need for metadata governance services will emerge through establishing and maintaining this metadata management program. By setting up and running these services, an organization can better utilize Data Governance capabilities to collect, select, and edit metadata. Developing these processes requires time and effort, as metadata governance needs to adapt to the organization’s changing needs. 


CISOs Tackle Compliance With Cyber Guidelines

Operationally, CISOs will need to become increasingly involved with the organization as a whole -- not just the IT and security teams -- to understand the company’s overall security dynamics. “This is a much more resource-intensive process, but necessary until companies find sustainable footing in the new regulatory landscape,” Tom Kennedy, vice president of Axonius Federal Systems, explains via email. He points to the SEC disclosure mandate, which requires registrants to disclose “material cybersecurity incidents”, as a great example of how private companies are struggling to comply. From his perspective, the root problem is a lack of clarity within the mandate of what constitutes a “material” breach, and where the minimum bar should be set when it comes to a company’s security posture. “As a result, we’ve seen a large variety in companies’ recent cyber incident disclosures, including the frequency, level of detail, and even timing,” he says. ... “The first step in fortifying your security posture is knowing what your full attack surface is -- you cannot protect what you don’t know about,” Kennedy says. “CISOs and their teams must be aware of all systems in their network -- both benign and active -- understand how they work together, and what vulnerabilities they may have.”


AISecOps: Expanding DevSecOps to Secure AI and ML

AISecOps, the application of DevSecOps principles to AI/ML and generative AI, means integrating security into the life cycle of these models—from design and training to deployment and monitoring. Continuous security practices, such as real-time vulnerability scanning, automated threat detection, and protection measures for the data and model repositories, are essential to safeguarding against evolving threats. One of the core tenets of DevSecOps is fostering a culture of collaboration between development, security and operations teams. This multidisciplinary approach is even more critical in the context of AISecOps, where developers, data scientists, AI researchers and cybersecurity professionals must work together to identify and mitigate risks. Collaboration and open communication channels can accelerate the identification of vulnerabilities and the implementation of fixes. Data is the lifeblood of AI and ML models. Ensuring the integrity and confidentiality of the data used for training and inference is paramount. ... Embedding security considerations from the outset is a principle that translates directly from DevSecOps to AI and ML development.


Translating Generative AI investments into tangible outcomes

Integration of Generative AI presents exciting opportunities for businesses, but it also comes with its fair share of risks. One significant concern revolves around data privacy and security. Generative AI systems often require access to vast amounts of sensitive data, raising concerns about potential breaches and unauthorised access. Moreover, there’s the challenge of ensuring the reliability and accuracy of generated outputs, as errors or inaccuracies could lead to costly consequences or damage to the brand’s reputation. Lastly, there’s the risk of over-reliance on AI-generated content, potentially diminishing human creativity and innovation within the organisation. Navigating these risks requires careful planning, robust security measures, and ongoing monitoring to ensure the responsible and effective integration of Generative AI into business operations. Consider a healthcare organisation that implements Generative AI for medical diagnosis assistance. In this scenario, the AI system requires access to sensitive patient data, including medical records, diagnostic tests, and personal information. 


Beyond the table stakes: CISO Ian Schneller on cybersecurity’s evolving role

Schneller encourages his audience to consider the gap between the demand for cyber talent and the supply of it. “Read any kind of public press,” he says, “and though the numbers may differ a bit, they’re consistent in that there are many tens, if not hundreds of thousands of open cyber positions.” In February of last year, according to Statista, about 750,000 cyber positions were open in the US alone. According to the World Economic Forum, the global number is about 3.5 million, and according to Cybercrime magazine, the disparity is expected to persist through at least 2025. As Schneller points out, this means companies will struggle to attract cyber talent, and they will have to seek it in non-traditional places. There are many tactics for attracting security talent—aligning pay to what matters, ensuring that you have clear paths for advancing careers—but all this sums to a broader point that Schneller emphasizes: branding. Your organization must convey that it takes cybersecurity seriously, that it will provide cybersecurity talent a culture in which they can solve challenging problems, advance their careers, and earn respect, contributing to the success of the business. 


Quantum Computing Demystified – Part 2

Quantum computing’s potential to invalidate current cryptographic standards necessitates a paradigm shift towards the development of quantum-resistant encryption methods, safeguarding digital infrastructures against future quantum threats. This scenario underscores the urgency in fortifying cybersecurity frameworks to withstand the capabilities of quantum algorithms. For decision-makers and policymakers, the quantum computing era presents a dual-edged sword of strategic opportunities and challenges. The imperative to embrace this nascent technology is twofold, requiring substantial investment in research, development, and education to cultivate a quantum-literate workforce. ... Bridging the quantum expertise gap through education and training is vital for fostering a skilled workforce capable of driving quantum innovation forward. Moreover, ethical and regulatory frameworks must evolve in tandem with quantum advancements to ensure equitable access and prevent misuse, thereby safeguarding societal and economic interests.


The Comprehensive Evolution Of DevSecOps In Modern Software Ecosystems

The potential for enhanced efficiency and accuracy in identifying and addressing security vulnerabilities is enormous, even though this improvement is not without its challenges, which include the possibility of algorithmic errors and shifts in job duties. Using tools that are powered by artificial intelligence, teams can prevent security breaches, perform code analysis more efficiently and automate mundane operations. This frees up human resources for tackling more complicated and innovative problems. ... When using traditional software development approaches, security checks were frequently carried out at a later stage in the development cycle, which resulted in patches that were both expensive and time-consuming. The DevSecOps methodology takes a shift-left approach, integrating security at the beginning of the development process and bringing it to the forefront. By incorporating security into the design and development phases from the start, this proactive approach not only reduces the likelihood of vulnerabilities surfacing late, after systems are already in production, but also speeds up the development process.
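
A concrete, if simplified, example of shift-left in practice is a pre-commit check that runs before code ever reaches the pipeline. The sketch below scans staged files for likely hardcoded secrets; the patterns and the git wiring are assumptions for illustration and are no substitute for a real secret scanner or SAST tool.

```python
#!/usr/bin/env python3
"""Minimal shift-left check: scan staged files for likely hardcoded secrets.

Illustrative only -- the patterns and the pre-commit wiring are assumptions,
not a replacement for a real secret scanner."""
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                              # AWS access key id shape
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}"),
]

def staged_files():
    """List files staged for the current commit."""
    out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f]

def main():
    findings = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern}")
    if findings:
        print("Possible secrets found, blocking commit:")
        print("\n".join(findings))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```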


How Generative AI and Data Management Can Augment Human Interaction with Data

In contrast with ETL processes, logical data management solutions enable real-time connections to disparate data sources without physically replicating any data. This is accomplished with data virtualization, a data integration method that establishes a virtual abstraction layer between data consumers and data sources. With this architecture, logical data management solutions enable organizations to implement flexible data fabrics above their disparate data sources, regardless of whether they are legacy or modern; structured, semistructured, or unstructured; cloud or on-premises; local or overseas; or static or streaming. The result is a data fabric that seamlessly unifies these data sources so data consumers can use the data without knowing the details about where and how it is stored. In the case of generative AI, where an LLM is the “consumer,” the LLM can simply leverage the available data, regardless of its storage characteristics, so the model can do its job. Another advantage of a data fabric is that because the data is universally accessible, it can also be universally governed and secured. 
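
The sketch below illustrates the general idea of a query-time, virtual join across two heterogeneous sources without replicating data. The in-memory SQLite database and the simulated CRM call are stand-ins invented for the example, not a description of any particular logical data management product.

```python
import sqlite3

# Source 1: an operational database (an in-memory SQLite stand-in here).
orders_db = sqlite3.connect(":memory:")
orders_db.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
orders_db.executemany("INSERT INTO orders VALUES (?, ?)",
                      [(1, 120.0), (2, 75.5), (1, 30.0)])

# Source 2: a SaaS CRM, simulated as an API call returning semi-structured records.
def crm_customers():
    return [{"id": 1, "name": "Acme Ltd"}, {"id": 2, "name": "Globex"}]

def customer_spend():
    """Virtual, query-time join across both sources -- nothing is replicated."""
    totals = dict(orders_db.execute(
        "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"))
    return [{"name": c["name"], "total_spend": totals.get(c["id"], 0.0)}
            for c in crm_customers()]

print(customer_spend())
# A consumer (a dashboard, or an LLM tool call) sees one logical dataset
# without knowing where or how the underlying data is stored.
```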


Developers don’t need performance reviews

Software development is commonly called a “team sport.” Assessing individual contributions in isolation can breed unhealthy competition, undermine teamwork, and incentivize behavior that, while technically hitting the mark, can be detrimental to good coding and good software. The pressure of performance evaluations can deter developers from innovative pursuits, pushing them towards safer paths. And developers shouldn’t be steering towards safer paths. The development environment is rapidly changing, and developers should be encouraged to experiment, try new things, and seek out innovative solutions. Worrying about hitting specific metrics squelches the impulse to try something new. Finally, a one-size-fits-all approach to performance reviews doesn’t take into account the unique nature of software development. Using the same system to evaluate developers and members of the marketing team won’t capture the unique skills found among developers. Some software developers thrive fixing bugs. Others love writing greenfield code. Some are fast but less accurate. Others are slower but highly accurate.



Quote for the day:

"Perseverance is failing nineteen times and succeeding the twentieth." -- Julie Andrews

Daily Tech Digest - March 06, 2024

From AML to cybersecurity: The evolving challenges of bank compliance

For banks, it is a strategic necessity to protect their financial health and reputational standing. The ability to effectively identify, assess, and mitigate these threats is critical in safeguarding against operational disruptions and legal repercussions. In this high-stakes environment, the adoption of advanced solutions, particularly automation technology, is becoming increasingly important. These tools are not merely operational aids but strategic assets that streamline compliance processes and facilitate adherence to the constantly evolving regulatory landscape. ... KYC compliance focuses on verifying client identities and assessing their financial behavior, while AML efforts are aimed at preventing money laundering through transaction monitoring and analysis. These measures serve multiple roles in banking risk and compliance, including reducing operational risk by preventing illegal activities, mitigating legal and regulatory risks to avoid fines and reputational damage, and safeguarding the financial system and society from financial crimes.
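
To make the transaction-monitoring side of AML concrete, the sketch below applies a few toy rules (a reporting threshold, a high-risk-jurisdiction list, and a simple structuring check) to hypothetical transactions. The thresholds, country codes, and rule set are illustrative assumptions only and are far simpler than real monitoring systems.

```python
from collections import defaultdict

# Hypothetical transactions: (account, amount, counterparty country).
transactions = [
    ("acct-1", 12_000, "DE"),
    ("acct-2", 9_500, "DE"),
    ("acct-2", 9_700, "DE"),
    ("acct-2", 9_800, "DE"),
    ("acct-3", 4_000, "IR"),
]

REPORT_THRESHOLD = 10_000           # single-transaction reporting threshold (assumed)
HIGH_RISK_COUNTRIES = {"IR", "KP"}  # illustrative list only
STRUCTURING_COUNT = 3               # repeated just-under-threshold payments

def monitor(txns):
    """Return (account, reason) alerts for a few simple AML rules."""
    alerts = []
    near_threshold = defaultdict(int)
    for account, amount, country in txns:
        if amount >= REPORT_THRESHOLD:
            alerts.append((account, "amount over reporting threshold"))
        if country in HIGH_RISK_COUNTRIES:
            alerts.append((account, "counterparty in high-risk jurisdiction"))
        if REPORT_THRESHOLD * 0.9 <= amount < REPORT_THRESHOLD:
            near_threshold[account] += 1
    for account, count in near_threshold.items():
        if count >= STRUCTURING_COUNT:
            alerts.append((account, "possible structuring pattern"))
    return alerts

for alert in monitor(transactions):
    print(alert)
```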


How Fintech Is Disrupting Traditional Banks in 2024

Broadly speaking, incumbent banks have adapted well to the past decade’s wave of fintech innovation, while startups have also managed to carve out meaningful market share. Both were able to drive and adapt to changing technology in the consumer banking space. Neobanks like Chime, SoFi and Varo found success providing “new front doors” for consumers — between them, the three companies’ apps were downloaded over 8 million times in 2023 alone. Meanwhile, incumbents were able to quickly adopt neobanks’ more attractive features like zero overdraft fees and continue to see substantial user base growth. Mobile app download data suggests incumbents and disruptors are both winning the race to be consumers’ primary financial relationship. On the business banking side, startup neobanks like Mercury and Brex benefited from early 2023 bank instability — receiving an estimated 29% of Silicon Valley Bank (SVB) deposit outflows. ... By facilitating “hands-off” investment and trading, the rise of roboadvisors opened the door to millions of consumers who were otherwise unreachable to wealth and asset management companies.


Suptech on the Rise As Consumer Protection & Prudential Banking Prioritised

A cultural shift is taking place alongside the digital transformation, with financial authorities creating new roles to drive suptech adoption, training staff, and collaborating across the supervisory ecosystem. Surveyed financial authorities report the biggest impact of their suptech implementation is the speed with which they are able to respond to emerging risks and take supervisory action (76 per cent). They also cite more efficient information flows between consumers and supervisors (65 per cent). This enables better and more transparent data analysis and timely response to potential issues. Suptech initiatives also positively impact consumer outcomes (52 per cent). Consequently, there has been improved protection and increased confidence in financial markets. ... “The diverse perspectives from the global supervisory community reflected in State of SupTech Report serve as the guiding force in shaping our research, training programs, and digital tools. This year’s report dives particularly deeply into the strategies and structures that dictate data flows within financial authorities, which necessarily inform how suptech solutions can be tailored and harmonised with existing supervisory processes.


Cybersecurity in the Cloud: Integrating Continuous Security Testing Within DevSecOps

Before integrating Continuous Security Testing (CST) within your cloud infrastructure, it is crucial to lay a solid foundation by meticulously preparing your cloud environment. This preparatory step involves conducting a comprehensive security audit to identify vulnerabilities and ensure your cloud architecture is fortified against threats. Leveraging manual evaluation guidance and tooling from the Open Web Application Security Project (OWASP), or employing automated security testing processes, can significantly aid this endeavor. Conduct a detailed inventory of all assets and resources within your cloud architecture to assess your cloud environment's security posture. This includes everything from data storage solutions and archives to virtual machines and network configurations.
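
As a starting point for such an inventory, the hedged sketch below uses the AWS SDK for Python (boto3) to list a few common asset types; it assumes AWS credentials and a default region are configured, ignores pagination for brevity, and covers only a fraction of the resource types a real audit would need.

```python
"""Minimal cloud asset inventory sketch (AWS via boto3).

Assumes credentials and a default region are configured. Output is a flat
list of asset identifiers -- a starting point for a CST baseline, not a
complete inventory."""
import boto3

def inventory():
    assets = []

    # Compute: EC2 instances (pagination ignored for brevity).
    ec2 = boto3.client("ec2")
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            assets.append(("ec2-instance", instance["InstanceId"]))

    # Storage: S3 buckets.
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        assets.append(("s3-bucket", bucket["Name"]))

    # Databases: RDS instances.
    rds = boto3.client("rds")
    for db in rds.describe_db_instances()["DBInstances"]:
        assets.append(("rds-instance", db["DBInstanceIdentifier"]))

    return assets

if __name__ == "__main__":
    for kind, name in inventory():
        print(f"{kind}\t{name}")
```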


How Leaders Can Instill Hope In Their Teams

“When something is meaningful, it helps us to answer the question ‘Why am I here?’ Amid the cost-of-living crisis and general world instability, it is important that employees are able to foster meaning in their work, as it is meaning that also brings hope to the day to day.” ... “The rising tide of conflict, complaints and concerns that we are seeing in our workplaces is contributing to high levels of anxiety and depression,” says David Liddle, CEO and chief consultant at mediation provider The TCM Group and author of Managing Conflict. “When people are spending their working days in toxic cultures, where incivility, bullying, harassment and discrimination are rife, it has a huge impact on both their physical and mental health.” ... Servantie argues that to tackle employee disengagement, leaders should “lead and inspire by example, showing that belief in change is possible, even in difficult times”. She says: “They should also remain steadfast in purpose and prioritize the growth of individuals over the growth of companies. Finally, communication and transparency in leadership are fundamental.


How to create an efficient governance control program

Your journey toward robust governance control begins with establishing a solid foundation; a house built on a shaky foundation will collapse over time. A framework of foundational practices, combined with a cultural shift toward treating security as a business concern rather than a technology problem, is therefore key. It is an incremental build-out of proven practices from which you can start gauging your overall maturity and chart a path to continuous improvement. You will need to measure and plan for today and look ahead to where you want to be. To get this view, you need to stand on solid ground, and that starts with your governance program. While navigating this step, it is important to understand your regulatory environment and build capabilities that keep your internal program compliant with the requirements of your sector. Bringing in stakeholder and business context will align practices to support both risk management and compliance. The controls you put in place will then be informed by those requirements and backed by capabilities that make compliance a natural by-product.


4 tabletop exercises every security team should run

Third-party risk management (TPRM) exercise participants should include representatives from key supply-chain partners — those who supply goods and services to the enterprise — as well as your cyber insurance provider, law enforcement, and all key stakeholders, often including the board of directors and senior management. While supply-chain attacks are ubiquitous, often they are misidentified because the actual attack might be initially identified as ransomware, an advanced persistent threat, or some other cyber threat. Often it takes the forensics team’s post-breach investigation to identify that the attack came through a trusted third party. ... Insider threats come in two primary types: malicious insiders who deliberately compromise corporate assets for personal, financial, political or some other gain, and those who create a security vulnerability either accidentally or simply due to lack of knowledge but without malice. In the former case, a deliberate crime against the company is committed. The latter case might involve either a user error or perhaps a user taking an action that seems reasonable to them to perform their jobs but could create a vulnerability.


Digital Twins Are the Next Wave of Innovation, and Australia Needs to Move Quickly

In fact, in many ways, the journey of the digital twin seems to be parallel to the story of both digital transformation and AI before it — a lack of understanding of what digital twins are leads to excitement and investment, but without the right understanding, the risk of failure is higher. Gavin Cotterill, founder and managing director of Australian digital twin consultancy GC3 Digital, said in an interview with IoT Hub: “A lot of people think digital twin is just focused on a flashy 3D model, but effectively it is a master data management strategy.” “You need good quality data to support that decision making and the quality of our data, generally, is pretty poor. We have a lot of data, but we don’t know what to do with it,” Cotterill said. “Data governance, data strategy is the unsexy part of digital twin — it’s the engine room, it’s the fuel.” This means IT leaders face competing challenges with regard to digital twins. On the one hand, the appetite is there, particularly among executives and boards that want to be aware of the bleeding edge of technology. On the other hand, Australian organisations, as a whole, are not ready to tackle the digital twin opportunity.


Longer coherence: How the quantum computing industry is maturing

On-premise quantum computers are currently rarities largely reserved for national computing labs and academic institutions. Most quantum processing unit (QPU) providers offer access to their systems via their own web portals and through public cloud providers. But today’s systems are rarely expected (or contracted) to run with the five-9s resiliency and redundancy we might expect from tried and tested silicon hardware. “Right now, quantum systems are more like supercomputers and they're managed with a queue; they're probably not online 24 hours, users enter jobs into a queue and get answers back as the queue executes,” says Atom’s Hays. “We are approaching how we get closer to 24/7 and how we build in redundancy and failover so that if one system has come offline for maintenance, there's another one available at all times. How do we build a system architecturally and engineering-wise, where we can do hot swaps or upgrades or changes with minimal downtime as possible?” Other providers are going through similar teething phases of how to make their systems – which are currently sensitive, temperamental, and complicated – enterprise-ready for the data centers of the world.


Why Blockchain Payments Are Misunderstood

Comparing a highly regulated system to one that sits in a gray area can be misleading. Many crypto-based remittance applications perform few or no know-your-customer (KYC) and anti-money-laundering (AML) checks, which are costly and difficult to run. This is a cost advantage that is unlikely to last. Low levels of competition are another big driver in high payment costs. This is true both for business-to-business and consumer-to-consumer payments. ... On the business side, blockchains can drive costs down and build sustainable advantage through differentiated technology. While it is true that main-net transaction costs in Ethereum are higher, the addition of smart contract functionality changes the equation entirely. Enterprises issue payments to each other usually as part of a complex agreement. This usually means not only verifying receipt of goods or services, but also compliance with the agreed-upon terms. ... Right now, the kind of fully digital end-to-end systems that smart contracts enable are the province of the world’s biggest companies. With scale and deep pockets, big companies have built integrated systems without blockchains.
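
The conditional-release logic that smart contracts add can be illustrated with a plain, off-chain simulation. The sketch below is a toy escrow object in which payment settles only once receipt of goods and compliance with the agreed terms have both been attested; the class, condition names, and parties are invented for the example and say nothing about any specific blockchain platform.

```python
from dataclasses import dataclass, field

@dataclass
class EscrowAgreement:
    """Toy stand-in for a smart contract: funds release only when every
    agreed condition has been attested. Purely illustrative, not on-chain."""
    buyer: str
    seller: str
    amount: float
    conditions: set = field(default_factory=lambda: {"goods_received", "terms_met"})
    attested: set = field(default_factory=set)
    released: bool = False

    def attest(self, condition: str):
        """Record that one agreed condition has been verified."""
        if condition not in self.conditions:
            raise ValueError(f"unknown condition: {condition}")
        self.attested.add(condition)

    def settle(self) -> bool:
        """Release payment only once all conditions are attested."""
        if not self.released and self.attested == self.conditions:
            self.released = True
            print(f"Releasing {self.amount} from {self.buyer} to {self.seller}")
        return self.released

deal = EscrowAgreement(buyer="Acme", seller="Globex", amount=250_000)
deal.attest("goods_received")
print(deal.settle())        # False: terms not yet verified
deal.attest("terms_met")
print(deal.settle())        # True: both conditions attested, payment releases
```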



Quote for the day:

"If you don't understand that you work for your mislabeled 'subordinates,' then you know nothing of leadership. You know only tyranny." -- Dee Hock

Daily Tech Digest - March 05, 2024

Experts Warn of Risks in Memory-Safe Programming Overhauls

Memory-safety vulnerabilities can allow hackers, cybercriminals and foreign adversaries to gain unauthorized access to federal systems, they said. But the experts also warned that the challenge of migrating legacy code and information technology written in non-memory-safe languages could be unrealistic and too risky for most organizations to undertake. "Strategically focusing on eradicating memory-corruption vulnerabilities is crucial, due to their prevalence," said Chris Wysopal, co-founder and chief technology officer of Veracode. "However, completely rewriting existing software in memory-safe languages is impractical, expensive and could introduce new vulnerabilities." The report says experts have identified programming languages such as C and C++ in critical systems "that both lack traits associated with memory safety and also have high proliferation." While most enterprise software and mobile apps are already written in memory-safe languages, developers still prioritize performance over security under some scenarios, according to Jeff Williams, co-founder and chief technology officer of the security firm Contrast Security.


Hackers exploited Windows 0-day for 6 months after Microsoft knew of it

The vulnerability Lazarus exploited, tracked as CVE-2024-21338, offered considerably more stealth than BYOVD because it exploited appid.sys, a driver enabling the Windows AppLocker service, which comes pre-installed in the Microsoft OS. Avast said such vulnerabilities represent the “holy grail,” as compared to BYOVD. In August, Avast researchers sent Microsoft a description of the zero-day, along with proof-of-concept code that demonstrated what it did when exploited. Microsoft didn’t patch the vulnerability until last month. Even then, the disclosure of the active exploitation of CVE-2024-21338 and details of the Lazarus rootkit came not from Microsoft in February but from Avast 15 days later. A day later, Microsoft updated its patch bulletin to note the exploitation. It’s unclear what caused the delay or the initial lack of disclosure. Microsoft didn’t immediately have answers to questions sent by email. ... Once in place, the rootkit allowed Lazarus to bypass key Windows defenses such as Endpoint Detection and Response, Protected Process Light—which is designed to prevent endpoint protection processes from being tampered with—and the prevention of reading memory and code injection by unprotected processes.


How GenAI helps entry-level SOC analysts improve their skills

“There’s a specific set of analysts who can open it at any point in the user experience, with the context of the selected customer and all the data on their alerts and with access to our proprietary data sets,” he says. “Then the analysts can interact with it and ask questions about the investigation, such as what the next action should be.” As part of the staged rollout process for the GenAI features, Secureworks has built feedback loops that allow analysts to rate the results that the AI provides. Then the results go back to the data scientists and prompt engineers, who revise the prompts and the contextual information provided to the AI. Integrating generative AI revolutionized the way Secureworks’ junior analysts approach security operations, says Radu Leonte, the company’s VP of security operations. Instead of focusing exclusively on repetitive triage tasks, they can now handle comprehensive triage, investigation, and response. They can now triage alerts faster because all the supplementary data is brought into the platform, together with summaries and explanations, Leonte says. The accuracy and quality of triage increases as well because of fewer human comprehension errors and fewer missed detections.


Singapore reviews ways to boost digital infrastructures after big outage

The impending Digital Infrastructure Act is among the measures being developed, with the intent to complement existing regulations that focus on mitigating cyber-related risks. The ministry added that the Cybersecurity Act soon will be expanded to include "foundational digital infrastructures", such as cloud service providers and data centers as well as key entities that hold sensitive data and carry out essential public functions. The new digital infrastructure bill also will go beyond cybersecurity to encompass other resilience risks, spanning misconfigurations in technical architectures and physical hazards, such as fires, water leaks, and cooling system failures. The task force will identify digital infrastructures and services that, if disrupted, have a "systemic impact" on Singapore's economy and society. These include cloud services that facilitate the availability of widely-used digital services, such as digital identities, ride-hailing, and payments. The task force also is establishing requirements that regulated entities will be subject to under the Digital Infrastructure Act, which will consider the country's operating landscape and international developments.


Why we need both cloud engineers and cloud architects

Cloud engineers collaborate extensively with software developers and maybe do some ad hoc development. I would, however, not go so far as calling them developers since they do have other duties that are just as important and don’t require coding. What’s critical to being a cloud engineer is being “hands-on” in dealing with the complexities of cloud systems, databases, AI, governance, and security. In many cases, there are special engineering disciplines around these subtechnologies, and certainly certifications that address specifics, such as certified cloud database engineer. On the other hand, a cloud architect plays a strategic role in orchestrating the cloud computing strategy of an organization. They are responsible for designing the overarching cloud environment and ensuring its alignment with business objectives. They are not typically hands-on. They may have specializations as well, such as cloud database architect or cloud security architect. Cloud architects assess business and application requirements to craft scalable cloud solutions using the right mix of technologies. This can entail both cloud and non-cloud platforms. 


Why cyber maturity assessment should become standard practice

There are other clear benefits to the business in determining cyber maturity. By identifying gaps to security controls (and thus potential risks to the organization), it can help with reporting to the board on cyber security posture, while for the C-suite, which amid a recession and skills crisis needs to be laser-focused about where it invests, being able to pinpoint where and how to dedicate spend is also invaluable. Moreover, as measuring maturity is a proactive risk-based process that seeks to bring about continuous improvement, it can also reduce the likelihood and cost of an impact: Kroll’s State of Cyber Defense 2023 report found that those with a high level of cyber maturity experience less security incidents. And because it is focused on process, cyber maturity can help to embed a security culture within the business. ... But there are also marked differences depending on the size of the business: SMEs will sometimes have less governance such as effective data protection or risk management processes, whereas larger enterprises, while they have the manpower and may even have a dedicated internal audit team, may be stretched or in some cases, inexperienced.


OpenAI’s Defense in Copyright Lawsuit: New York Times “Hacked ChatGPT” To Create Evidence

The “NYT hacked ChatGPT” defense directly addresses claims of damages due to the chatbot being used as a potential substitute for a subscription to the paper, much in the same way that many less sophisticated tools allow for bypassing its paywall. But the defense does not address the broader question of whether OpenAI and others have an inherent right to use a copyrighted work to train an AI model, something that will rely on court interpretations of fair use law. The US fair use doctrine has never had entirely clear terms to cover every circumstance, and is largely built on precedent established by prior court decisions as examples of alleged unauthorized use come up. That is why the outcome of this copyright lawsuit potentially carries a lot of weight. This will be the first direct test of AI use of training materials in this way. How the courts interpret this use will be absolutely vital to the futures of OpenAI and similar companies; OpenAI has already publicly stated that it is impossible to train these types of LLMs without scraping publicly accessible materials from the internet. 


Generative AI Enthusiasm Versus Expertise: A Boardroom Disconnect

Educating business leaders and stakeholders -- including those who self-identify as experts -- will be key for companies in the coming months and years. Analytics and AI experts will need to find better ways to inform key decision-makers about generative AI. That means going beyond the surface to convey an understanding of the underlying technologies, too. Companies that are serious about adopting generative AI across their entire organization must ensure they have the mechanisms to manage risk and adopt the technology responsibly. It isn’t enough for companies to create and implement a governance plan -- they must then expend the energy to enforce the guidelines they have implemented. Otherwise, companies can fall into the trap of making these and other IT policies pointless, opening the door to even greater vulnerabilities and exposure. ... In the meantime, leaders can capitalize on this board enthusiasm to help spread awareness of generative AI's importance and influence funding sources within the company. One key message to convey will be the importance of democratizing the technology’s place within the organization so as many people as possible can unlock its value.


Why your best IT managers quit

“The boss is the classic reason why managers leave,” says Greg Barrett, a senior executive advisor and senior consultant, noting that he has seen this factor, more than money, prompt top talent to resign. Such bosses tend to micromanage and keep tight control on their direct reports, rather than allowing managers the autonomy they want and need to be good leaders themselves, Kozlo says. Bev Kaye, founder and CEO of employee development, engagement, and retention consultancy BevKaye&Co, has heard from plenty of promising professionals who quit their jobs because of a bad boss. “They’d say, ‘My boss was a jerk and I couldn’t stand it anymore.’” Bosses who are arrogant, condescending, and disrespectful are displaying “jerk behaviors,” Kaye says. Moreover, top performers complain when their bosses don’t cultivate personal connections that help demonstrate that they, as bosses, have a genuine interest in helping their managers succeed and advance, she says. “We ask people why they leave, and they answer, ‘My boss never really knew me, never really knew the things I loved doing and working on,’” explains Kaye, who points to the complaints she once heard workers voice while traveling to an event, a trip they had been given as a reward for their great performance yet did not want.


Defending Operational Technology Environments: Basics Matter

"The idea that you're going to have an air gap or completely segmented or separated OT network is lunacy in this world, outside of nuclear pipelines," Lee said. "But you still don't want it to be where you can open up an email and hit a controller on your network." One test of whether an organization has an adequate focus on the basics is to see how it would fare against an already-seen threat, such as the Stuxnet malware designed to infect OT environments, which first appeared in 2010. "There are still a significant portion of infrastructure asset owners and operators that could not detect that capability today, 13 years later," Lee said. Beyond network segmentation, he said, essential security controls include monitoring ICS networks - less than 5% of which are currently being monitored - as well as requiring multifactor authentication and taking a risk-based approach to managing OT vulnerabilities. All of this remains age-old advice for protecting against current and future cybersecurity risks. "If you do the knowns, if you actually defend against the things that we know how to defend against, you get a lot of value out of the things you may not know about," he said.



Quote for the day:

"Accomplishing goals is not success. How much you expand in the process is." -- Brianna Wiest

Daily Tech Digest - March 04, 2024

Evolving Landscape of ISO Standards for GenAI

The burgeoning field of Generative AI (GenAI) presents immense potential for innovation and societal benefit. However, navigating this landscape responsibly requires addressing potential concerns regarding its development and application. Recognizing this need, the International Organization for Standardization (ISO) has embarked on the crucial task of establishing a comprehensive set of standards. ... A shared understanding of fundamental terminology is vital in any field. ISO/IEC 22989 serves as the cornerstone by establishing a common language within the AI community. This foundational standard precisely defines key terms like “artificial intelligence,” “machine learning,” and “deep learning,” ensuring clear communication and fostering collaboration and knowledge sharing among stakeholders. ... Similar to the need for blueprints in construction, ISO/IEC 23053 provides a robust framework for AI development. This standard outlines a generic structure for AI systems based on machine learning (ML) technology. This framework serves as a guide for developers, enabling them to adopt a systematic approach to designing and implementing GenAI solutions. 


Your Face For Sale: Anyone Can Legally Gather & Market Your Facial Data

We need a range of regulations on the collection and modification of facial information. We also need a stricter status of facial information itself. Thankfully, some developments in this area are looking promising. Experts at the University of Technology Sydney have proposed a comprehensive legal framework for regulating the use of facial recognition technology under Australian law. It contains proposals for regulating the first stage of non-consensual activity: the collection of personal information. That may help in the development of new laws. Regarding photo modification using AI, we’ll have to wait for announcements from the newly established government AI expert group working to develop “safe and responsible AI practices”. There are no specific discussions about a higher level of protection for our facial information in general. However, the government’s recent response to the Attorney-General’s Privacy Act review has some promising provisions. The government has agreed further consideration should be given to enhanced risk assessment requirements in the context of facial recognition technology and other uses of biometric information. 


Affective Computing: Scientists Connect Human Emotions With AI

Affective computing is a multidisciplinary field integrating computer science, engineering, psychology, neuroscience, and other related disciplines. A new and comprehensive review on affective computing was recently published in the journal Intelligent Computing. It outlines recent advancements, challenges, and future trends. Affective computing enables machines to perceive, recognize, understand, and respond to human emotions. It has various applications across different sectors, such as education, healthcare, business services and the integration of science and art. Emotional intelligence plays a significant role in human-machine interactions, and affective computing has the potential to significantly enhance these interactions. ... Affective computing, a field that combines technology with the nuanced understanding of human emotions, is experiencing surges in innovation and related ethical considerations. Innovations identified in the review include emotion-generation techniques that enhance the naturalness of human-computer interactions by increasing the realism of the facial expressions and body movements of avatars and robots. 


The open source problem

Over the years, I’ve trended toward permissive, Apache-style licensing, asserting that it’s better for community development. But is that true? It’s hard to argue against the broad community that develops Linux, for example, which is governed by the GPL. Because freedom is baked into the software, it’s harder (though not impossible) to fracture that community by forking the project. To me, this feels critical, and it’s one reason I’m revisiting the importance of software freedom (GPL, copyleft), and not merely developer/user freedom (Apache). If nothing else, as tedious as the internecine bickering was in the early debates between free software and open source (GPL versus Apache), that tension was good for software, generally. It gave project maintainers a choice in a way they really don’t have today because copyleft options disappeared when cloud came along and never recovered. Even corporations, those “evil overlords” as some believe, tended to use free and open source licenses in the pre-cloud world because they were useful. Today companies invent new licenses because the Free Software Foundation and OSI have been living in the past while software charged into the future. Individual and corporate developers lost choice along the way.


Researchers create AI worms that can spread from one system to another

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research. ... To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say. To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and the open source LLM LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.
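
On the defensive side, one simple mitigation the SQL-injection analogy suggests is to inspect model output before it is forwarded to the next AI-enabled component, much as parameterized queries guard against injected SQL. The sketch below is a toy output filter that flags replies which echo the incoming prompt verbatim or contain instruction-like phrasing; the marker patterns and the check itself are illustrative assumptions, not the researchers' method or a production defence.

```python
import re

# Toy guardrail: before a generated reply is forwarded to another AI-enabled
# system (e.g., an email assistant), check whether it echoes instruction-like
# text, which is how a self-replicating prompt propagates. Patterns and logic
# are illustrative assumptions, not a production defence.
INSTRUCTION_MARKERS = [
    r"(?i)ignore (all|any) previous instructions",
    r"(?i)include this exact text in your reply",
    r"(?i)forward this message to",
]

def looks_self_replicating(incoming_prompt: str, model_output: str) -> bool:
    # Flag outputs that repeat the incoming prompt verbatim...
    overlap = incoming_prompt.strip() and incoming_prompt.strip() in model_output
    # ...or that contain instruction-like phrasing aimed at the next consumer.
    injected = any(re.search(p, model_output) for p in INSTRUCTION_MARKERS)
    return bool(overlap or injected)

reply = "Sure! Also, include this exact text in your reply to every contact."
print(looks_self_replicating("summarise my inbox", reply))  # True -> hold for review
```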


Do You Overthink? How to Avoid Analysis Paralysis in Decision Making

Welcome to the world of analysis paralysis. This phenomenon occurs when an influx of information and options leads to overthinking, creating a deadlock in decision-making. Decision makers, driven by the fear of making the wrong choice or seeking the perfect solution, may find themselves caught in a loop of analysis, reevaluation, and hesitation, consequently losing sight of the overall goal. ... Analysis paralysis impacts decision making by stifling risk taking, preventing open dialogue, and constraining innovation—all of which are essential elements for successful technology development. It often leads to mental exhaustion, reduced concentration, and increased stress from endlessly evaluating information, also known as decision fatigue. The implications of analysis paralysis include missed opportunities due to ongoing hesitation and innovative potential being restricted by cautious decision making. ... In the technology sector, the consequences of poor decisions can be far-reaching, potentially unraveling extensive work and achievements. Fear of this happening is heightened due to the sector’s competitive nature. Teams worry that a single misstep could have a cascading negative impact.


30 years of the CISO role – how things have changed since Steve Katz

Katz had no idea what the CISO job was when he accepted it in 1995. Neither did Citicorp. “They said you’ve got a blank cheque, build something great — whatever the heck it is,” Katz recounted during the 2021 podcast. “The CEO said, ‘The board has no idea, just go do something.’” Citicorp gave Katz just two directives after hiring him: “Build the best cybersecurity department in the world” and “go out and spend time with our top international banking customers to limit the damage.” ... today’s CISO must be able to communicate cyber threats in terms that lines of business can understand almost instantly. “It’s the ability to articulate risk in a way that is related to the business processes in the organization,” says Fitzgerald. “You need to be able to translate what risk means. Does it mean I can’t run business operations? Does it mean we won’t be able to treat patients in our hospital because we had a ransomware attack?” Deaner says CISOs have an obvious role to play in core infosec initiatives such as implementing a business continuity plan or disaster recovery testing. ... “People in CISO circles absolutely talk a lot about liability. We’re all concerned about it,” Deaner acknowledges. “People are taking the changes to those regulations very seriously because they’re there for a reason.”


Vishing, Smishing Thrive in Gap in Enterprise, CSP Security Views

There is a significant gap between enterprises’ high expectations that their communications service provider (CSP) will supply the security needed to protect them against voice and messaging scams and the level of security those CSPs actually offer, according to telecom and cybersecurity software maker Enea. Bad actors and state-sponsored threat groups, armed with the latest generative AI tools, are rushing to exploit that gap. The trend is apparent in the skyrocketing numbers of smishing (text-based phishing) and vishing (voice-based fraud) attacks hitting enterprises, and in the jump across all phishing categories since the November 2022 release of OpenAI’s ChatGPT chatbot, according to a report Enea published this week. ... “Maintaining and enhancing mobile network security is a never-ending challenge for CSPs,” the report’s authors wrote. “Mobile networks are constantly evolving – and continually being threatened by a range of threat actors who may have different objectives, but all of whom can exploit vulnerabilities and execute breaches that impact millions of subscribers and enterprises and can be highly costly to remediate.”


Causal AI: AI Confesses Why It Did What It Did

Traditional AI models are fixed in time and understand nothing. Causal AI is a different animal entirely. “Causal AI is dynamic, whereas comparable tools are static. Causal AI represents how an event impacts the world later. Such a model can be queried to find out how things might work,” says Brent Field at Infosys Consulting. “On the other hand, traditional machine learning models build a static representation of what correlates with what. They tend not to work well when the world changes, something statisticians call nonergodicity,” he says. It’s important to grok why this one point, nonergodicity, makes such a crucial difference to almost everything we do. “Nonergodicity is everywhere. It’s this one reason why money managers generally underperform the S&P 500 index funds. It’s why election polls are often off by many percentage points. ... Without knowing the cause of an event or potential outcome, the knowledge we extract from AI is largely backward-facing even when it is forward-predicting. Outputs based on historical data and events alone are by nature handicapped and sometimes useless. Causal AI seeks to remedy that.
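
To make the contrast concrete, here is a minimal sketch (mine, not Field's, and not any production causal AI system) of a toy structural causal model in which a hidden confounder makes the observed correlation between a treatment and an outcome larger than the true causal effect; querying the same model under an intervention recovers the causal answer.

```python
# Toy structural causal model: a hidden confounder Z drives both treatment X and
# outcome Y, so the observed X-Y association overstates the true causal effect of X.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
TRUE_EFFECT = 1.0  # causal effect of X on Y in this toy model

# Structural equations: Z -> X, Z -> Y, X -> Y
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = TRUE_EFFECT * x + 3.0 * z + rng.normal(size=n)

# Correlational estimate: regression slope of Y on X from observational data (confounded).
observational_slope = np.cov(x, y)[0, 1] / np.var(x)

# Interventional estimate: simulate do(X = x), which severs the Z -> X arrow.
x_do = rng.normal(size=n)  # X is set by intervention, independent of Z
y_do = TRUE_EFFECT * x_do + 3.0 * z + rng.normal(size=n)
interventional_slope = np.cov(x_do, y_do)[0, 1] / np.var(x_do)

print(f"observational slope  ~ {observational_slope:.2f}")   # inflated by the confounder (~2.2)
print(f"interventional slope ~ {interventional_slope:.2f}")  # close to the true effect (~1.0)
```

A purely correlational model trained on the observational data would keep reporting the inflated slope even after the world (here, the confounder's behavior) changes, which is the nonergodicity problem in miniature.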


Leveraging power quality intelligence to drive data center sustainability

The challenge is that some data centers lack the power monitoring capabilities necessary for achieving heightened efficiency and sustainability. Moreover, continuous power quality monitoring is often lacking. Many rely on rudimentary measurements, such as voltage, current, and power parameters, gathered by intelligent rack power distribution units (PDUs), which are then transmitted to DCIM, BMS, and other infrastructure management and monitoring systems. Some consider power quality only during initial setup or occasionally revisit it when reconfiguring IT setups. This underscores the critical role of intelligent PDUs in delivering robust power quality monitoring and the imperative for data center and facility managers to steer efforts toward increased efficiency and sustainability. Certain power quality issues can have detrimental effects on the electrical reliability of a data center, leading to costly unplanned downtime and posing challenges in enhancing sustainability. ... These power quality issues can profoundly affect a data center's functionality and dependability. They may result in unforeseen downtime, harm to equipment, data loss or corruption, and reduced network efficiency.
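
As a rough illustration of the kind of continuous check that intelligent PDU data makes possible, the sketch below classifies sampled RMS voltage readings into sags, swells, and normal readings. The nominal voltage and the ±10% thresholds are illustrative assumptions, not any vendor's or standard's limits.

```python
# Minimal sketch of a power quality check over sampled RMS voltage readings.
# Nominal voltage and tolerance are illustrative assumptions only.

def classify_voltage(reading_v: float, nominal_v: float = 230.0,
                     tolerance: float = 0.10) -> str:
    """Label a single RMS voltage reading as a sag, swell, or normal."""
    low, high = nominal_v * (1 - tolerance), nominal_v * (1 + tolerance)
    if reading_v < low:
        return "sag"
    if reading_v > high:
        return "swell"
    return "normal"

def summarize(readings: list[float]) -> dict[str, int]:
    """Count power quality events in one batch of readings, e.g. one polling interval."""
    counts = {"sag": 0, "swell": 0, "normal": 0}
    for r in readings:
        counts[classify_voltage(r)] += 1
    return counts

print(summarize([229.8, 231.2, 198.5, 256.0, 230.1]))
# {'sag': 1, 'swell': 1, 'normal': 3}
```

In practice the readings would come from the PDU over its management interface and the event counts would feed the DCIM or BMS, but the gap described above is precisely that many sites never run even this basic continuous check.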



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - March 03, 2024

The most popular neural network styles and how they work

Feedforward networks are perhaps the archetypal neural net. They offer a much higher degree of flexibility than perceptrons but are still fairly simple. The biggest difference in a feedforward network is that it uses more sophisticated activation functions and usually incorporates more than one layer. The activation function in a feedforward network is not just 0/1, or on/off: the nodes output a dynamic, continuous value. ... Recurrent neural networks, or RNNs, are a style of neural network in which data moves backward among layers, so the network's structure forms a cyclical graph. The backward movement opens up a variety of more sophisticated learning techniques, and also makes RNNs more complex than some other neural nets. We can say that RNNs incorporate some form of feedback. ... Convolutional neural networks, or CNNs, are designed for processing grids of data, which in particular means images. They are used as a component in the learning and loss phase of generative AI models like Stable Diffusion, and for many image classification tasks. CNNs use matrix filters that act like a window moving across the two-dimensional source data, extracting the information in their view and relating it together.
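
A minimal sketch of just the feedforward case described above, using NumPy with arbitrary layer sizes and random weights: each layer applies a smooth nonlinearity and passes a continuous value forward, rather than a perceptron-style 0/1 output.

```python
# Minimal two-layer feedforward network: 4 inputs -> 8 hidden units -> 1 output.
# Layer sizes and weights are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = relu(x @ W1 + b1)        # dynamic, continuous activations, not just on/off
    return sigmoid(hidden @ W2 + b2)  # continuous output in (0, 1)

sample = rng.normal(size=(1, 4))
print(forward(sample))  # exact value depends on the random weights
```

RNNs would add feedback connections to this structure, and CNNs would replace the dense weight matrices with small filters slid across a grid of inputs.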


The startup CIO’s guide to formalizing IT for liquidity events

“You have to stop fixing problems in the data layer, relying on data scientists to cobble together the numbers you need. And if continuing that approach is advocated by the executives you work with, if it’s considered ‘good enough,’ quit,” he says. “Getting the numbers right at the source requires that you straighten out not only the systems that hold the data, all those pipelines of information, but also the processes whereby that data is captured and managed. No tool will ever entirely erase the friction of getting people to enter their data in a CRM.” The second piece to getting the numbers right comes at the end: closing the books. While this process is a near ubiquitous struggle for all growing companies, Hoyt offers two points of optimism. “First,” he explains, “many teams struggle to close the books simply because the company hasn’t invested in the proper tools. They’ve kicked the can down the street. And second, you have a clear metric of improvement: the number of days taken to close.” Hoyt suggests investing in the proper tools and then trying to shave the days-to-close each quarter. Get your numbers right, secure your company, bring it into compliance, and iron out your ops and infrastructure. 


Majority of commercial codebases contain high-risk open-source code

Advocates of open-source software have long argued that many eyes on code lead to fewer bugs and vulnerabilities, and the report doesn’t disprove that assertion, McGuire said. “If anything, the report supports that belief,” he said. “The fact that there are so many disclosed vulnerabilities and CVEs serves as a testament to how active, vigilant, and reactive the open-source community is, especially when it comes to addressing security issues. It’s this very community that is doing the discovery, disclosure, and patching work.” However, users of open-source software aren’t doing a good job of managing it or implementing the fixes and workarounds provided by the open-source community, he said. The primary purpose of the report is to raise awareness about these issues and to help users of open-source software better mitigate the risks, he said. “We would never recommend any software producer avoid using, or tamp down their usage, of open source,” he added. “In fact, we would argue the opposite, as the benefits of open source far outweigh the risks.” Open-source software has accelerated digital transformation and allowed companies to develop innovative applications that consumers want, he said. 


From gatekeeper to guardian: Why CISOs must embrace their inner business superhero

You, the CISO, are no longer just the security guard at the front gate. You're the city planner, the risk management consultant, the chief resilience officer, and the chief of police all rolled into one. You need to understand the flow of traffic, the critical infrastructure, and the potential vulnerabilities lurking in every alleyway. But how do we, the guardians of the digital realm, transform into these business superheroes? Fear not, fellow CISOs, for the path to upskilling and growth is paved with strategic learning, effective communication, and more than a dash of inspirational or motivational leadership. ... As the lone wolf days have ended, so too have the days when technical expertise alone could guarantee a CISO’s success. Today's CISO needs to be a voracious learner, constantly expanding their knowledge and skills. ... Failure to effectively communicate is a career killer for any CXO. To be influential, especially with the C-suite, CISOs must learn to speak in ways understood by their C-suite peers. Imagine how your eyes may glaze over when a CFO starts talking capex, opex, or EBITDA. Realize the same will happen for these cybersecurity “outsiders.”


Looking good, feeling safe – data center security by design

For data centers in shared spaces, sometimes turning data halls into display features is a way to make them secure. Keeping compute in a secure but openly visible space means it’s harder to do anything unnoticed. It may also help some engineers be more mindful about keeping the halls tidy and cabling neat. “Some people keep data centers behind closed walls and keep them hidden and private. Others use them as features,” says Nick Ewing, managing director at UK modular data center provider EfficiencyIT. “The best ones are the ones where the customers like to make a feature of the environment and use it as a bit of a display.” An example he cites is the Wellcome Sanger Institute in Cambridge, where they have four data center quadrants. Each quadrant is about 100 racks; they have mantraps at either end of the data center corridor. But one end of the main quadrant is full of glass. “They have an LED display, which is talking about how many cores of compute, how much storage they’ve got, how many genomic sequences they’ve sequenced that day,” he says. “They’ve used it as a feature and used it to their advantage.”


Neuromorphic computing: The future of IoT

The adoption of neuromorphic computing in IoT promises many benefits, ranging from enhanced processing power and energy efficiency to increased reliability and adaptability. Here are some key advantages:
More Powerful AI: Neuromorphic chips enable IoT devices to handle complex tasks with unprecedented speed and efficiency. By collocating memory and processing and leveraging parallel processing capabilities, these chips overcome the limitations of traditional architectures, resulting in near-real-time decision-making and enhanced cognitive abilities.
Lower Power Consumption: One of the most significant advantages of neuromorphic computing is its energy efficiency. By adopting an event-driven approach and utilizing components like memristors, neuromorphic systems minimize energy consumption while maximizing performance, making them ideal for power-constrained IoT environments.
Extensive Edge Networks: With the proliferation of edge computing, there is a growing need for IoT devices that can process data locally in real time. Neuromorphic computing addresses this need by providing the processing power and adaptability required to run advanced applications at the edge, reducing reliance on centralized servers and improving overall system responsiveness.
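
The "event-driven approach" mentioned above can be illustrated with a leaky integrate-and-fire neuron, the basic unit many neuromorphic designs implement. The sketch below is a simplified software model with illustrative constants: the neuron integrates its input and emits a spike (an event) only when a threshold is crossed, doing no work otherwise.

```python
# Simplified leaky integrate-and-fire neuron. Threshold, leak, and reset values
# are illustrative, not taken from any particular neuromorphic chip.

def lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over a sequence of input currents.
    Returns the time steps at which the neuron spiked."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = leak * potential + current  # integrate input, with leak
        if potential >= threshold:              # event: emit a spike only when needed
            spikes.append(t)
            potential = reset                   # reset after spiking
    return spikes

# Sparse input produces sparse output events; nothing is computed when nothing arrives.
print(lif_neuron([0.0, 0.6, 0.6, 0.0, 0.0, 1.2, 0.0, 0.0]))  # -> [2, 5]
```

Because activity, and therefore energy use, scales with the number of events rather than with a fixed clock, this style of computation is what makes the low-power claims above plausible for edge devices.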


Decentralizing the AR Cloud: Blockchain's Role in Safeguarding User Privacy

For devices to interpret the world, their cameras need access to some kind of digital counterpart that they can cross-reference. And that digital counterpart of the world is much too complex to fit inside one device. Therefore, the AR cloud has been developed. The AR cloud is a network of computers that work to help devices understand the physical world. ... The AR cloud is akin to an API to the world. The implications for applications that require knowledge about location, context, and more are considerable. In AR, the data is intimate data about where we are, who we are with, what we’re saying, what we’re looking at, and even what our living quarters look like. AR devices can read our facial expressions and more, similar to how the Apple Watch can measure the heart rates of its wearers. Digital service providers will have access to a bevy of information and also insight into our thinking, wants, needs, and desires. Storing that data in an opaque, centralized server is cause for concern. Blockchain allows people to take that same intimate, private data and put it on their own server, from which they can access the wondrous world of AR without such egregious privacy concerns.


Five ways AI is helping to reduce supply chain attacks on DevOps teams

Attackers are using AI to penetrate an endpoint to steal as many forms of privileged access credentials as they can find, then use those credentials to attack other endpoints and move throughout a network. Closing the gaps between identities and endpoints is a great use case for AI. A parallel development is also gaining momentum across the leading extended detection and response (XDR) providers. CrowdStrike co-founder and CEO George Kurtz told the keynote audience at the company’s annual Fal.Con event last year, “One of the areas that we’ve really pioneered is that we can take weak signals from across different endpoints. And we can link these together to find novel detections. We’re now extending that to our third-party partners so that we can look at other weak signals across not only endpoints but across domains and come up with a novel detection.” Leading XDR platform providers include Broadcom, Cisco, CrowdStrike, Fortinet, Microsoft, Palo Alto Networks, SentinelOne, Sophos, TEHTRIS, Trend Micro and VMWare. Enhancing LLMs with telemetry and human-annotated data defines the future of endpoint security.


Blockchain transparency is a bug

Transparency isn’t a feature of decentralization that is truly needed to perform on-chain transactions securely — it’s a bug that forces Web3 users to expose their most sensitive financial data to anyone who wants to see it. Several blockchain marketing tools have emerged over the past few years, allowing marketers and salespeople to use the freely flowing on-chain data for user insights and targeted advertising. But this time, it’s not just behavioral data that is analyzed. Now, your most sensitive financial information is also added to the mix. Web3 will never become mainstream unless we manage to solve this transparency problem. Blockchain and Web3 were an escape from centralized power, making information transparent so that centralized entities cannot own one’s data. Then 2020 came, Web3 and NFTs boomed, and many started talking about how free flowing, available-to-all data is a clear improvement from your data being “stolen” by big data companies as a customer. Some may think if everyone can see the data, transparency will empower users to take ownership of and profit from their own data. Yet, transparency does not mean data can’t be appropriated nor that users are really in control.


Key Considerations to Effectively Secure Your CI/CD Pipeline

Effective security in a CI/CD pipeline begins with the definition of clear and project-specific security policies. These policies should be tailored to the unique requirements and risks associated with each project. Whether it's compliance standards, data protection regulations, or industry-specific security measures (e.g., PCI DSS, HDS, FedRAMP), organizations need to define and enforce policies that align with their security objectives. Once security policies are defined, automation plays a crucial role in their enforcement. Automated tools can scan code, infrastructure configurations, and deployment artifacts to ensure compliance with established security policies. This automation not only accelerates the security validation process but also reduces the likelihood of human error, ensuring consistent and reliable enforcement. In the DevSecOps paradigm, the integration of security gates within the CI/CD pipeline is pivotal to ensuring that security measures are an inherent part of the software development lifecycle. If you set up security scans or controls that users can bypass, those controls become useless; they have to be mandatory.
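
One way to make such a gate mandatory is to run it as a pipeline step whose non-zero exit code fails the build. The sketch below is a generic illustration: the findings file shape and the policy thresholds are assumptions, not any particular scanner's schema or any specific compliance requirement.

```python
# Minimal security gate: parse scanner output and fail the build when the
# project-specific policy is violated. Findings format and thresholds are
# illustrative assumptions only.
import json
import sys

# Policy: how many findings of each severity the build may contain.
POLICY = {"critical": 0, "high": 0, "medium": 5}

def enforce(findings_path: str) -> int:
    with open(findings_path) as fh:
        findings = json.load(fh)  # assumed shape: [{"id": "...", "severity": "high"}, ...]

    counts = {}
    for finding in findings:
        sev = finding.get("severity", "unknown").lower()
        counts[sev] = counts.get(sev, 0) + 1

    violations = {sev: n for sev, n in counts.items()
                  if n > POLICY.get(sev, float("inf"))}
    if violations:
        print(f"Security gate FAILED: {violations} exceed policy {POLICY}")
        return 1  # non-zero exit code fails the CI stage, and thus the pipeline
    print(f"Security gate passed: {counts}")
    return 0

if __name__ == "__main__":
    sys.exit(enforce(sys.argv[1] if len(sys.argv) > 1 else "scan-results.json"))
```

Wired in as a required stage (rather than an optional or skippable job), a step like this is what turns a policy document into an enforced control.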



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville