Daily Tech Digest - January 31, 2024

Rethinking Testing in Production

With products becoming more interconnected, trying to accurately replicate third-party APIs and integrations outside of production is close to impossible. Trunk-based development, with its focus on continuous integration and delivery, acknowledges the need for a paradigm shift. Feature flags emerge as the proverbial Archimedes lever in this transformation, offering a flexible and controlled approach to testing in production. Developers can now gradually roll out features without disrupting the entire user base, mitigating the risks associated with traditional testing methodologies. Feature flags empower developers to enable a feature in production for themselves during the development phase, allowing them to refine and perfect it before exposing it to broader testing audiences. This progressive approach ensures that potential issues are identified and addressed early in the development process. As the feature matures, it can be selectively enabled for testing teams, engineering groups, or specific user segments, facilitating thorough validation at each step. The logistical nightmare of maintaining identical environments is alleviated, as testing in production becomes an integral part of the development workflow.
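
In practice, the gate can be as small as a flag check whose targeting rules widen over time. Below is a minimal sketch of that progression in Python; the flag name, tiers, and rollout percentage are illustrative assumptions rather than details from the article:

```python
import hashlib

# Illustrative rollout tiers for one flag, widened as the feature matures.
ROLLOUT = {
    "new-checkout": {
        "developers": {"alice@example.com"},   # phase 1: the feature's author
        "testers": {"qa@example.com"},         # phase 2: testing teams
        "percent_of_users": 5,                 # phase 3: a small user segment
    }
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Return True if `flag` is on for this user under the current rollout."""
    cfg = ROLLOUT.get(flag)
    if cfg is None:
        return False
    if user_id in cfg["developers"] or user_id in cfg["testers"]:
        return True
    # Deterministic bucketing: hashing keeps each user in the same bucket,
    # so a 5% rollout exposes a stable 5% slice of the user base.
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["percent_of_users"]

# The calling code stays identical through every phase of the rollout:
if is_enabled("new-checkout", "bob@example.com"):
    ...  # new code path
else:
    ...  # stable code path
```

Real flag services (LaunchDarkly, Unleash, OpenFeature) move the rollout table out of the code and into a dashboard, but the check at the call site looks much the same.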


Enterprise Architecture in the Financial Realm

Enterprise architecture emerges as the North Star guiding banks through these changes. Its role transcends being a mere operational construct; it becomes a strategic enabler that harmonizes business and technology components. A well-crafted enterprise architecture lays the foundation for adaptability and resilience in the face of digital transformation. Enterprise architecture manifests two key characteristics: unity and agility. The unity aspect inherently provides an enterprise-level perspective, where business and IT methodologies seamlessly intertwine, creating a cohesive flow of processes and data. Conversely, agility in enterprise architecture construction involves deconstruction and subsequent reconstruction, refining shared and reusable business components, akin to assembling Lego bricks. ... Quantifying the success of digital adaptation is crucial. Metrics should not solely focus on financial outcomes but also on key performance indicators reflecting the effectiveness of digital initiatives, customer satisfaction, and the agility of operational models.


Cloud Security: Stay One Step Ahead of the Attackers

The relatively easy availability of cloud-based storage can lead to data sprawl that is uncontrolled and unmanageable. In many cases, data that should be deleted or secured is left ungoverned because organizations are not aware of its existence. In April 2022, cloud data security firm Cyera found unmanaged data store copies, snapshots, and log data. The firm’s researchers found that 60% of the data security issues present in cloud data stores were due to unsecured sensitive data. They further observed that over 30% of scanned cloud data stores held ghost data, and that more than 58% of these ghost data stores contained sensitive or very sensitive data. ... Despite best practices advised by cloud service providers, data breaches that originate in the cloud have only increased. IBM’s annual Cost of a Data Breach report, for example, highlights that 45% of studied breaches occurred in the cloud. What is also noteworthy is that a significant 43% of reporting organizations, those that stated they are just in the early stages of implementing security practices to protect their cloud environments or have not started at all, observed higher breach costs.


Five Questions That Determine Where AI Fits In Your Digital Transformation Strategy

Once you understand the why and the what, only then can you consider how your organization can use insights from AI to better accomplish its goals. How will your people respond, and how will they benefit? Today’s organizations have multiple technology partners, and they may have many that are all saying they can do AI. But how will your organization work with all those partners to make an AI solution come together? Many organizations are developing AI policies to define how it can be used. Having these guardrails ensures that your organization is operating ethically, morally and legally when it comes to the use of AI. ... It’s important to consider whether your organization is truly ready for AI at an enterprise or divisional level before deciding to implement AI at scale. Pilot projects can help you determine whether the implementation is generating the intended results and better understand how end users will interact with the processes. If you can't achieve customization and personalization across the organization, AI initiatives will be much tougher to implement.


A Dive into the Detail of the Financial Data Transparency Act’s Data Standards Requirements

The act is a major undertaking for regulators and regulated firms. It is also an opportunity for the LEI, if selected, to move to another level in the US, which has been slow to adopt the identifier, and to significantly increase numbers that will strengthen the Global LEI System. While industry experts suggest regulators in scope of the FDTA, collectively called Financial Stability Oversight Council (FSOC) agencies, initially considered data standards including the LEI and the Financial Instrument Global Identifier published by Bloomberg, they say the LEI is the best match for the regulation’s requirement that covered agencies establish “common identifiers” for information reported to covered regulatory agencies, which could include transactions and financial products/instruments. ... The selection and implementation of a reporting taxonomy is more challenging, as it will require many of the regulators to abandon existing reporting practices often based on PDFs, text and CSV files, and replace these with electronic reporting and machine-readable tagging. XBRL fits the bill, say industry experts, although there has been pushback from some agencies that see the unfunded requirement for change as too great a burden.


Data Center Approach to Legacy Modernization: When is the Right Time?

Legacy systems can lead to inefficiencies in your business. If we take one of the parameters mentioned above, such as cooling, one example of inefficiency could lie within an old server that’s no longer of use but still turned on. This could be placing unnecessary strain on your cooling, thus impacting your environmental footprint. Legacy systems may no longer be the most appropriate for your business, as newer technologies emerge that offer a more efficient method of producing the same, or better, results. If you neglect this technology, you might be giving your competitors an advantage which could be costly for your business. ... A cyber-attack takes place every 39 seconds, according to one report. This puts businesses at risk of losing or compromising not only their intellectual property and assets but also their customers’ data. This could put you at risk of damaging your reputation and even facing regulatory fines. One of the best reasons to invest in digital transformation is for the security of your business. Systems that no longer receive updates can become a target of cyber-attacks and act as a vulnerability within your technology infrastructure.


4 paths to sustainable AI

Hosting AI operations at a data center that uses renewable power is a straightforward path to reduce carbon emissions, but it’s not without tradeoffs. Online translation service DeepL runs its AI functions from four co-location facilities: two in Iceland, one in Sweden, and one in Finland. The Icelandic data center uses 100% renewably generated geothermal and hydroelectric power. The cold climate also eliminates 40% or more of the total data center power needed to cool the servers, because the facility can open the windows rather than run air conditioners, says DeepL’s director of engineering, Guido Simon. Cost is another major benefit, he says, with prices of five cents per kilowatt-hour compared to about 30 cents or more in Germany. The network latency between the user and a sustainable data center can be an issue for time-sensitive applications, says Stent, but only in the inference stage, where the application provides answers to the user, rather than the preliminary training phase. DeepL, with headquarters in Cologne, Germany, found it could run both training and inference from its remote co-location facilities. “We’re looking at roughly 20 milliseconds more latency compared to a data center closer to us,” says Simon.


Can ChatGPT drive my car? The case for LLMs in autonomy

Autonomous driving is an especially challenging problem because certain edge cases require complex, human-like reasoning that goes far beyond legacy algorithms and models. LLMs have shown promise in going beyond pure correlations to demonstrating a real “understanding of the world.” This new level of understanding extends to the driving task, enabling planners to navigate complex scenarios with safe and natural maneuvers without requiring explicit training. ... Safety-critical driving decisions must be made in less than one second. The latest LLMs running in data centers can take 10 seconds or more. One solution to this problem is hybrid-cloud architectures that supplement in-car compute with data center processing. Another is purpose-built LLMs that compress large models into form factors small enough and fast enough to fit in the car. Already we are seeing dramatic improvements in optimizing large models. Mistral 7B and Llama 2 7B have demonstrated performance rivaling GPT-3.5 with an order of magnitude fewer parameters (7 billion vs. 175 billion). Moore’s Law and continued optimizations should rapidly shift more of these models to the edge.
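
The hybrid-cloud option boils down to a latency-budget check at dispatch time. A rough sketch of that routing logic follows; the one-second budget, the cloud round-trip estimate, and the scene/model interfaces are all assumptions for illustration:

```python
LATENCY_BUDGET_S = 1.0       # assumed: safety-critical decisions within a second
CLOUD_RTT_ESTIMATE_S = 10.0  # assumed: observed latency of a data-center LLM

def plan_maneuver(scene, edge_model, cloud_model):
    """Route a planning query to the in-car model or the data center.

    `scene`, `edge_model`, and `cloud_model` are hypothetical interfaces;
    the point is that time-critical decisions never leave the vehicle.
    """
    if scene.time_critical or CLOUD_RTT_ESTIMATE_S > LATENCY_BUDGET_S:
        # A compressed, purpose-built model (on the order of 7B parameters)
        # answers locally, within the in-car compute budget.
        return edge_model.plan(scene)
    # Non-critical reasoning (route replanning, rare-scenario analysis)
    # can afford the round trip to a larger, more capable model.
    return cloud_model.plan(scene)
```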


The Race to AI Implementation: 2024 and Beyond

The biggest problem is that the competitive and product landscape will be undergoing massive flux, so picking a strategic solution will be increasingly difficult. Younger companies, which are less likely to be able to handle the speed of these advancements, should focus on openness so that if they fail, someone else can pick up support, interoperability, and compatibility. If you aren’t locked into a single vendor’s solution and can mix and match as needed, you can move on or off a platform based on your needs. As with any new technology, take advice about hardware selection from the platform supplier. This means that if you are using ChatGPT, you want to ask OpenAI for advice about new hardware. If you are working with Microsoft or Google or any other AI developer, ask them what hardware they would recommend. ... You need a vendor that embraces all the client platforms for hybrid AI and one with a diverse, targeted solution set that individually focuses on the markets your firm is in. Right now, only Lenovo seems to have all the parts necessary thanks to its acquisition of Motorola.



Quote for the day:

"It's fine to celebrate success but it is more important to heed the lessons of failure." -- Bill Gates

Daily Tech Digest - January 30, 2024

Most cloud-based genAI performance stinks

Generative AI systems often comprise various components. They include data ingestion services, storage, computing, and networking. Architecting these components to work synergistically often leads to overcomplexity, where overall performance is determined by the poorest-performing component and issues are difficult to isolate. I’ve seen poorly performing networks and saturated databases. Those things are not directly related to generative AI, but they can cause performance problems nonetheless. ... Protecting AI models and their data against unauthorized access and breaches goes without saying, especially in cloud environments where multitenancy is common. Too many performance issues can themselves raise security risks. In many instances, security mechanisms, such as encryption, introduce performance overhead that, if not addressed, will worsen as the data grows. Architecture and testing are your friends here. Take some time to understand how security affects generative AI performance. ... Related to security is adherence to data governance and compliance standards, which can impose additional layers of performance management complexity. Much like security, we need to figure out how to work with these requirements.
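
That encryption overhead can be measured in isolation before it is blamed on the model. Here is a minimal benchmark sketch, assuming the third-party cryptography package is installed; the payload size and iteration count are arbitrary:

```python
import time
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())
payload = b"x" * 1_000_000  # 1 MB of synthetic data

start = time.perf_counter()
for _ in range(100):
    f.decrypt(f.encrypt(payload))  # one full encrypt/decrypt round trip
elapsed = time.perf_counter() - start

# Round trips per second on 1 MB payloads; compare this against your
# pipeline's ingest rate to see whether the crypto layer is the bottleneck.
print(f"{100 / elapsed:.1f} round trips/s")
```

Running the same measurement at several payload sizes shows how the overhead scales as data grows, which is exactly the trend the author warns about.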


Using AI and responsibility for data privacy

If the AI is a self-hosted solution without a connection to application programming interfaces (APIs) or other data flow to the developer/provider or other third parties, the user is likely to remain solely responsible under data protection law. The fact that the AI provider initially programmed and provided the AI system and determined the technical functionality and the algorithms used by the AI can hardly be sufficient for the AI provider to be held (co-)responsible. It is true that, through its programming, the AI provider already specifies the data processing (the means) that the user later initiates and adopts in the context of the subsequent concrete data processing. However, this is the case with all software and therefore cannot be deemed decisive for the role of the controller. If, on the other hand, the AI is a Software-as-a-Service (SaaS) or AI-as-a-Service offering and the AI provider is still involved in the data processing initiated by the user, the AI provider is at least one potential additional operator in the circle of possible controllers. However, this does not automatically make the AI provider the controller of the data processing carried out by the AI within the meaning of the GDPR.


Transformative technology trends coming to the fore in 2024

Human-machine interface (HMI) technology will transform the way people behave in various scenarios – this includes how drivers interact with their cars, engineers work with heavy machinery, laboratory technicians operate in hazardous environments, and much more. Advanced HMI won’t all be about gesture recognition, however. Expect to see greater adoption of natural voice interfaces around the world, as AI enables more native-language interactions with virtual assistants and chatbots. This should finally break the barrier that kept millions (if not billions) of potential customers away from technologies such as home assistants, which only operate in a few selected languages. ... We also need to keep a watchful eye on Gen AI for more nefarious reasons too. There is a high probability that malicious actors will also co-opt this technology to create computer viruses – leading to a surge in malware. AI is not the only cyber security concern, however. We are also seeing major developments in quantum computing. This could enable hackers to break, within minutes, encryption that would currently take years to crack.


Business privacy obligations hard to understand

Jo Stewart-Rattray, Oceania Ambassador at ISACA, said the results are worrying and cause for major concern globally, particularly around budget deficits, low confidence and lack of compliance clarity. “Every organisation in ANZ and across the world, from SMEs through to enterprise, has a responsibility to protect the privacy of its customer and stakeholder data, and many governments, including Australia’s Federal government, are updating legislation to ensure best practice,” said Ms Stewart-Rattray. “It is paramount that organisations understand what is expected of them in order to devise an effective privacy policy and implement it accordingly. Only then will they be able to realise the benefits of embedding privacy practices in digital transformation from the outset, including customer loyalty and reputational and financial performance.” ... “When privacy teams face limited budgets and skills gaps among their workforce, it can be even more difficult to stay on top of ever evolving and expanding data privacy regulations, and this can even increase the risk of data breaches,” says Safia Kazi, ISACA principal, privacy professional practices.


US-based cloud companies may need to reveal client details

The proposed change can restrict the pace of innovation in the Chinese AI ecosystem as the Chinese AI developers may be subjected to greater scrutiny by the US Government. “On the other hand, for local alternatives like Baidu ERNIE, Alibaba Tongyi Qianwen, Tencent Hunyuan, Huawei Pangu, Zhipu GLM, and Baichuan, this becomes important leverage for them to focus on their innovation despite the performance gap. It will also force Chinese vendors and enterprises to further prioritize localization, accelerating the evolution of AI software and hardware ecosystem in the long run,” said Charlie Dai, Vice President and Principal Analyst at Forrester. The restrictions may have implications for the global AI ecosystem as well. “In general, this will cast a shadow over the global AI ecosystem. Firstly, foreign companies, particularly those from China, may face greater scrutiny and oversight from the US government. This increased attention could lead to delays, additional costs, and potential restrictions on the development and deployment of AI applications,” Dai said. In addition, the requirement to disclose sensitive information about technology, data usage, and business operations can raise significant concerns about IP protection. 


Great security or great UX? Both, please

A security step-up should be used only for higher-risk scenarios, such as anomalous behavior or sensitive actions like purchasing a product, changing passwords or account information, or inputting financial details into a form. The average user of a B2B SaaS app should go months without running into a security step-up. Recognize when step-ups make sense and get rid of those that don’t. Fewer step-ups are more secure because users will not become numb to them; sparsely used step-ups will instead be perceived as an indication of a riskier environment or action that requires more care. Be smart, as well, about when you have strong enough information that a step-up is not warranted. For example, if a user logs in with strong 2FA like a security token and immediately goes into a sensitive process, a step-up may not be warranted because the session is short and the authentication is recent. How you do a step-up is also crucial. First, tell the user why you are asking for additional information. Second, make it easy for them to follow the process by explaining precisely what will happen in the step-up and providing visual cues like breadcrumbs.
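
Expressed as code, that policy is a handful of signals feeding one decision. A minimal sketch, assuming hypothetical session and action objects and a ten-minute freshness window (both are illustrative assumptions, not details from the article):

```python
from datetime import datetime, timedelta, timezone

STRONG_FACTORS = {"security_token", "webauthn", "totp"}
FRESHNESS_WINDOW = timedelta(minutes=10)  # assumed definition of "recent"

def needs_step_up(session, action) -> bool:
    """Decide whether this action warrants re-authentication."""
    if action.risk == "low" and not session.anomalous:
        return False  # everyday actions should never trigger a step-up
    authenticated_recently = (
        datetime.now(timezone.utc) - session.authenticated_at < FRESHNESS_WINDOW
    )
    if session.second_factor in STRONG_FACTORS and authenticated_recently:
        # Strong 2FA moments ago: a step-up here adds friction, not security.
        return False
    return True  # anomalous behavior or a stale/weak session: step up
```

Keeping the rule this explicit makes every exemption (like the fresh-2FA case) visible and auditable, rather than buried in scattered per-page checks.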


Mastering the data science gamble: Strategies for success in a volatile landscape

One of the most significant pitfalls companies face is the blind adoption of data science merely because it is the industry buzzword. Visionary implementation should not be about following trends; it should be about understanding the unique needs of the business and aligning data science initiatives with strategic goals. Companies need a clear roadmap, a vision that transcends the charm of technology trends and fads. Without a precise vision, data science initiatives are equivalent to a ship without a destination, drifting aimlessly amidst the digital sea. A well-defined strategy, coupled with risk mitigation techniques, ensures that data science efforts are not futile ventures but powerful tools driving tangible outcomes. Moreover, the landscape of data science is ever-changing. Adopting an agile approach, where hypotheses are tested rapidly, allows for quick iterations and adjustments. Being nimble in experimentation provides the flexibility to adapt models in response to evolving market demands. Rapid prototyping and experimentation allow businesses to fail fast, learn, and refine their approaches swiftly.


Ransomware’s Impact Could Include Heart Attacks, Strokes & PTSD

The psychological harm of ransomware attacks on staff is intense and often overlooked. The stress on the individuals involved in responding to ransomware attacks can be so considerable that companies hire a post-traumatic stress disorder support team. More senior employees suffer stress driven by financial concerns, while middle management suffers stress caused by extremely long workdays, including particularly stressful communications with the threat actor. IT teams are the main victims, as they endure extreme workday conditions and feel a direct responsibility for protecting the organization’s systems. ... Victims of ransomware attacks rarely share their experiences. In the best case, companies share an incident response report publicly to help other organizations improve their defenses, but also often to show their customers that they have handled the threat in a responsive way; yet many organizations stay silent for various reasons: reputational concerns, fear, or legal exposure. ... As stated by RUSI in the report, “there is a real human impact to ransomware attacks that is yet to be fully grasped and measured.”


Distributed Applications Need a Consistent Security Posture

With applications and APIs being made available across clouds and on-premises data centers, a comprehensive approach to security must include an authentication platform that is flexible and extensible and that functions with the various clients required to use it. The zero trust security model requires per-application authentication instead of a single network-level authentication that gives access to all. It doesn’t matter whether you choose a third-party identity provider or go the service provider route, but it’s important to provide a consistent authentication experience. Application end users get confused when they encounter different login experiences across different applications, and this inconsistency gives attackers an opening to capture credentials from unsuspecting employees and customers. Many developers build the authentication layer into their applications and APIs, which leads to security posture inconsistencies, due to varying skill levels among developers, lack of standardization, and haphazard policy enforcement, and also significantly increases development time and costs.
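
One common way to get that consistency is to pull token verification out of each application into a shared routine that defers to a central identity provider. Here is a sketch using the PyJWT library; the issuer URL, audience, and request shape are placeholders, not a specific product’s API:

```python
import jwt  # PyJWT, with its optional `cryptography` dependency installed
from jwt import PyJWKClient

ISSUER = "https://idp.example.com"  # hypothetical central identity provider
JWKS = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")  # provider's public keys

def authenticate(request) -> dict:
    """Shared verification used by every app and API, so each service
    enforces the same issuer, audience, and algorithm policy."""
    token = request.headers["Authorization"].removeprefix("Bearer ")
    signing_key = JWKS.get_signing_key_from_jwt(token).key
    return jwt.decode(
        token,
        signing_key,
        algorithms=["RS256"],   # never let the token pick its own algorithm
        issuer=ISSUER,
        audience="orders-api",  # each service states which audience it serves
    )
```

Because every service imports the same function (or sits behind the same gateway), users see one login experience and policy changes happen in one place.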


We Have Only Begun To Scratch The Surface Of AI’s True Innovative Power

The potential applications that will arise through AI “are vast and can potentially transform various industries—from healthcare and education to finance and retail," says Huang. AI-driven applications such as ChatGPT or AI Copilot "are redefining user access to information, enabling a more efficient and intuitive experience in place of traditional methods such as web searching. The advance of multi-modal AI models has created a new paradigm of opportunities for businesses around context generation and retrieval." It means there will be new and far more intuitive ways of dealing with computers. With recent AI progress in large language models, "in addition to voice and video generation, we will likely see new businesses, focused on providing more natural and human-centric interactions between humans and machines, sprouting up," Huang predicts. Business leaders across the spectrum recognize that we are only starting to recognize what AI — fused with other concepts — can deliver. “AI can act as a powerful tool for serendipity, connecting disparate information and fostering unexpected discoveries,” says Bownes.



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Laundry

Daily Tech Digest - January 29, 2024

Seven critical components of new performance management

With many aspects of performance, upfront clarity is needed about the target, standard, and minimum acceptable levels. Generic criteria such as “5 SMART objectives” risk constraining top performers or providing insufficient clarity to poor performers or those in developmental stages. General organisation-wide processes should be seen by managers as minimum requirements, not the ideal. Expectations should be calibrated for fairness at this stage – like setting a handicap before the metaphorical contest begins, not after the contest has ended. Monitoring and measuring is about ensuring that both the manager and the employee are engaged in monitoring and measuring all key aspects of performance (WHAT, HOW, and GROWTH). Only then will each individual receive sufficient, timely, and useful feedback to support improvement. This element also ensures that future assessment can be evidence-based. Enabling and enhancing is the key to performance management and is oftentimes given insufficient attention. We know that every interaction between a manager and a member of staff can have a significant impact on that individual’s motivation and performance.


How Are Regulators Reacting to the Speed of AI Development?

“The speed of AI development is incredibly exciting, as the finance industry stands to benefit in several ways. But we’d be naive to think such rapid technological change cannot outstrip the speed at which regulations are created and implemented. “Ensuring AI is adequately regulated remains a huge challenge. Regulators can start by developing comprehensive guidelines on AI safety to guide researchers, developers and companies. This will also help establish grounds for partnerships between academia, industry and government to foster collaboration in AI development, which brings us closer to the safe deployment and use of AI. “We can’t forget that AI is a new phenomenon in the mainstream, so we must see more initiatives to educate the public about AI and its implications, promoting transparency and understanding. It’s vital that regulators make such commitments but also pledge to fund research into AI safety and best practices. To see AI’s rapid acceleration as advantageous, and not risk reversing the fantastic progress already made, proper funding for research is non-negotiable.”


Russia hacks Microsoft: It’s worse than you think

This time around, though, Midnight Blizzard didn’t have to build a sophisticated hacking tool. To attack Microsoft, it used one of the most basic of hacking tricks: “password spraying.” In it, hackers try commonly used passwords against countless random accounts, hoping one will give them access. Once they get that access, they’re free to roam throughout a network, hack into other accounts, steal email and documents, and more. In a blog post, Microsoft said Midnight Blizzard broke into an old test account using password spraying and then used the account’s permissions to get into “Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions,” and steal emails and documents attached to them. The company claims the hackers initially targeted information about Midnight Blizzard itself, and that “to date, there is no evidence that the threat actor had any access to customer environments, production systems, source code, or AI systems.” As if to reassure customers, the company noted, “The attack was not the result of a vulnerability in Microsoft products or services.”


Prioritizing Data: Why a Solid Data Management Strategy Will Be Critical in 2024

Good decisions rely on shared data, especially the right data at the right time. Sometimes the challenge is that the data itself raises more questions than it answers. This trend will continue to worsen before it improves, as disjointed data ecosystems with disparate tools, platforms, and disconnected data silos become increasingly challenging for enterprises. This is why the concept of a data fabric has emerged as a method to better manage and share data. A data fabric’s holistic goal is to bring together data management tools covering everything from data identification, access, cleaning, and enrichment to transformation, governance, and analysis. That is a tall order, and it will take several years to mature before adoption happens across enterprises. Current solutions were not fully developed to deliver all the promises of a data fabric. In the coming year, organizations will incorporate knowledge graphs and artificial intelligence for metadata management to improve today’s offerings, and these will be a key criterion for making them more effective. Semantic metadata will enable decentralized data management, following the data mesh paradigm.


Transforming IT culture for business success

The “Creatorverse” work environment fosters creativity and collaboration through its blend of virtual work and state-of-the-art physical workspaces, Wenhold says. “All of this keeps our culture alive and keeps Business Technology a destination department,” he adds. An obsessive focus on simplicity anchors the belief and value system underpinning IT culture at the Pacific Northwest National Laboratory (PNNL), according to Brian Abrahamson, associate lab director and chief digital officer for computing and IT. For years, the lab struggled under the weight of decentralized IT and government standards and regulations, which complicated procedures and spurred too many overly complex systems that didn’t talk to one another. Under Abrahamson’s direction, the IT organization spent the past decade embracing human-centered design principles, delivering mobile accessibility, and creating personalized and effortless consumer-grade experiences designed to create connections among scientists and give them ready access to a workbench primed for scientific discovery.


The top four governance, risk & compliance trends to watch in 2024

Financial institutions handle sensitive consumer data every day, which is a responsibility integral to maintaining the trust consumers place in banks, credit unions, and similar entities. Safeguarding this data is not only a critical duty but also subject to rigorous regulation. The gravity of this responsibility is underscored by the potential ramifications of cyber incidents, which not only jeopardise consumer information but also strain a financial institution’s technological infrastructure. The fallout may include financial losses, reputational damage, and legal consequences. While many organisations have existing cybersecurity plans and incident response programs, the focus in 2024 is expected to shift towards rigorous testing. The dynamic nature of cybersecurity threats necessitates a proactive approach to ensure these plans and programs remain effective in the face of evolving challenges. Financial institutions may increasingly turn to external consultants for assistance in developing cybersecurity incident response policies or reviewing existing plans to ensure alignment with regulatory requirements.


5 ways tech leaders can increase their business acumen

There’s an opportunity to help business stakeholders advance their technical acumen and use the dialog to develop a shared understanding of problems, opportunities, and solution tradeoffs. Humberto Moreira, principal solutions engineer at Gigster, says, “The opportunity to interact directly with technologists can also give business stakeholders a useful peek behind the curtain at how tools they use every day are developed, so this meeting of the minds can be mutually beneficial to these two groups that don’t always communicate as well as they should.” ... Engineers must recognize the scale and complexity of automation before jumping into solutions. Following one user’s journey is insufficient requirements gathering when re-engineering a complex workflow involving many people and multiple departments using a mix of technologies and manual steps. Technology teams should follow six-sigma methodologies for these challenges by documenting process flows, measuring productivity, and capturing quality defect metrics as key steps to developing business acumen before diving into automation opportunities.


AI in 2024: Should We Still Be “Moving Fast and Breaking Things”?

It was clear from the moment it arrived on the scene that generative AI’s proficiency with natural language was a gamechanger, opening up this technology to legal professionals in a way that simply wasn’t possible in the past. Additionally, as time goes on, generative AI is able to work with larger and larger blocks of text. The days when the generative AI models could only handle 1000 words are in the rearview mirror; they can now handle 200,000 words. ... The best bet here is to look for vendors with an in-depth understanding of daily legal workflows combined with an understanding of which areas would actually benefit from AI as a way to streamline, accelerate, or otherwise enhance those workflows. After all, some workflows just need some Excel rules or some other “low tech” solution – while others scream out for the efficiency that AI can bring. Established vendors with domain expertise will understand these nuances. ... An old adage in Silicon Valley famously advises companies to “move fast and break things.” There was a little bit of that mindset over the past year, as firms jumped into generative AI because it was the technology of the moment, and no one wanted to seem like they were behind the curve for such a groundbreaking new technology.


eDiscovery and Cybersecurity: Protecting Sensitive Data Throughout Legal Proceedings

In today’s digital world, hackers are a constant threat to the security of sensitive data found in legal proceedings. Even law firm computer systems can be vulnerable to a hacker attack. Hackers who harbor malicious intent could then turn around and take advantage of the stolen data, using it to steal others’ identities, commit financial fraud, or worse. ... Law firms and attorneys are responsible for keeping client data safe and meeting privacy regulations. Not doing so results in liability lawsuits, charges of professional malpractice, and even the loss of customer confidence. The implications of data breaches in law don’t stop there, however. Lawsuits brought by affected individuals or regulatory bodies are a potential legal consequence of data breaches. These lawsuits can bring huge penalties for damages; they have sunk even the most established firms. Legal professionals involved in a data breach also may face professional sanctions, potentially including suspension or revocation of their licenses. Ethically, the mishandling of sensitive data goes against the principles of client confidentiality and trust.


Prioritizing cybercrime intelligence for effective decision-making in cybersecurity

Given the vast amount of cybercrime intelligence data generated daily, it is crucial for security teams to effectively prioritize the information they use for decision-making. To do this, I recommend security teams conduct regular risk assessments that consider the organization’s risk profile, taking into account historical data and similar companies in their industry. Once the risk profile is created, security teams can leverage the most suitable threat intelligence feeds and sources. Evaluation of these risks should not be static but rather a continuous process that allows teams to regularly review and update their priorities based on the evolving threat landscape. ... To balance gathering cybercrime intelligence with respecting privacy and adhering to legal considerations, organizations need to maintain strict legal compliance, including with data protection laws. Organizations should also minimise the collection of sensitive information, focus only on essential data, and establish clear ethical guidelines for their intelligence-gathering activities.



Quote for the day:

"Leaders draw out one's individual greatness." -- John Paul Warren

Daily Tech Digest - January 28, 2024

Evolution of Data Governance with Eric Falthzik

Falthzik explained that although those policies and guardrails are still important, business now moves too quickly to allow for such a slow-moving process. Workers need self-service access to data and analytics to remain competitive in the future. He added, “Enabling self-service involves some new areas of governance -- for example, pursuing active metadata management and being more diligent about data quality. We also need to discuss how we’re going to go forward in a world of data products and AI.” Another key component of modern data governance Falthzik recommends is implementing a federated architecture. “A centralized environment is part of the old-school process of a small group maintaining tight control over data; it won’t work in a self-service environment,” he said. “Business workers want to feel some sense of involvement in the process of governing the data they use daily. Additionally, some new concepts such as the data mesh recommend that the data domains be given far more autonomy, which can’t be done in a centralized environment.” He also noted that an added benefit of assigning more data operations to the business is that it will help identify those who would make the best data stewards.


Tech Works: How to Build a Career Like a Pragmatic Engineer

Specialist or Generalist? This is the question Orosz is asked in his continued conversations with developers: Should they dive deep into one technology or go broad? In last month’s issue of Tech Works, Kelsey Hightower argued you have to go deep to then be able to back up and take in the big picture. “It depends on the context of your company,” Orosz said. He offered an example: “If you’re, let’s say, a native mobile engineer, and everyone around you is a native mobile engineer, and there’s no opportunities to do web development, then probably the right thing is to go deep into that technology.” After all, you will have expert native mobile engineers around you to help you become an expert, too. At another point in your career, you may find yourself at a larger company that has many opportunities to learn from different teammates, tools and contexts. Take advantage. “As a software engineer, you don’t need any book, if you’re in the right environment — you have your peers, your colleagues, your mentors, your managers,” Orosz said. “And, if you’re in a good environment, they’ll help you grow with them.”


The testing pyramid: Strategic software testing for Agile teams

CI/CD automates the process of building, testing, and deploying your code, giving you complete control over how and when your tests and other development tasks are executed. The iterative nature of CI/CD processes means they integrate perfectly with the testing pyramid model, particularly in Agile environments. A typical CI/CD pipeline executes unit tests on every commit to a development branch, providing immediate feedback to help developers catch issues early in the development cycle. Integration tests typically run after unit tests have successfully passed and before a merge into the main branch, ensuring that different components work well together before significant changes are integrated into the broader application. End-to-end (E2E) tests are usually executed after all changes have been merged into the main branch and before deployment to staging or production environments, serving as a final verification that the application meets all requirements and functions correctly in an environment that closely mimics the production setting. This approach is a boon to Agile teams, facilitating rapid development and deployment. 
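
As a concrete sketch of how those tiers map onto pipeline stages, pytest markers can label each layer so every stage selects only its slice of the pyramid. The marker names, fixtures, and staging URL below are illustrative conventions, not pytest built-ins:

```python
import pytest

# Stage 1, on every commit:         pytest -m "not integration and not e2e"
# Stage 2, before merging to main:  pytest -m integration
# Stage 3, post-merge, pre-deploy:  pytest -m e2e
# (Register the markers in pytest.ini to avoid unknown-marker warnings.)

def test_price_rounding():
    # Unit tier: fast, no external dependencies, immediate feedback.
    assert round(19.999, 2) == 20.0

@pytest.mark.integration
def test_order_reserves_inventory(order_service, inventory_stub):
    # Integration tier: verifies two components cooperate (fixtures assumed).
    order = order_service.place(sku="sku-1", qty=2, inventory=inventory_stub)
    assert order.reserved

@pytest.mark.e2e
def test_checkout_flow(browser, staging_url):
    # E2E tier: exercises the deployed app in a production-like environment.
    browser.get(f"{staging_url}/checkout")
    assert "Order confirmed" in browser.page_source
```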


Digital tools transforming approach to omnichannel: Cloud, AI ensure seamless customer experience, data security

According to McKinsey, offering an omnichannel experience is no longer an option for retail organisations – it is vital to their very survival. In its report, the consulting firm pointed out that, while organisations may well look at omnichannel operations in isolation, customers do not, expecting a seamless experience regardless of whether they are at the store or browsing online. The role of digitisation in transforming the retail sector’s operations has been comprehensive – from marketing all the way to tailoring customer experience across channels. ... McKinsey estimates that concerted efforts to offer a personalised omnichannel experience can help organisations register an uptick in revenue of between 5% and 15%. The results Starbucks registered a decade after it launched a campaign allowing customers to place orders online and offering cashback and personalised rewards – USD one billion in prepaid mobile deposits – offer but a glimpse of the impact customisation of experience can have on an organisation’s bottom line. Personalisation of experience across all touchpoints requires organisations to compile large datasets on every customer to enhance the quality of their engagement with that specific brand.


Google’s New AI Is Learning to Diagnose Patients

Navigating health care systems as a patient can be daunting at the best of times, whether you’re interpreting jargon-filled diagnoses or determining which specialists to see next. Similarly, doctors often have grueling schedules that make it difficult to offer personalized attention to all their patients. These issues are only exacerbated in areas with limited physicians and medical infrastructure. Bringing AI into the doctor’s office to alleviate these problems is a dream that researchers have been working toward since IBM’s Watson made its debut over a decade ago, but progress toward these goals has been slow-moving. Now, large language models (LLMs), including ChatGPT, could have the potential to reinvigorate those ambitions. Researchers at Google DeepMind have proposed a new AI model called AMIE (Articulate Medical Intelligence Explorer) in a preprint paper published 11 January on arXiv. The model could take in information from patients and provide clear explanations of medical conditions in a wellness visit consultation. Vivek Natarajan is an AI researcher at Google and lead author on the paper.


Agile Methodologies for Edge Computing in IoT

Agile methodologies, with their iterative and incremental approach, are well-suited for the dynamic nature of IoT projects. They allow for continuous adaptation to changing requirements and rapid problem-solving, which is crucial in the IoT landscape where technologies and user needs evolve quickly. In the realm of IoT and edge computing, the dynamic and often unpredictable nature of projects necessitates an approach that is both flexible and robust. Agile methodologies stand out as a beacon in this landscape, offering a framework that can adapt to rapid changes and technological advancements. By embracing key Agile practices, developers and project managers can navigate the complexities of IoT and edge computing with greater ease and precision. These practices, ranging from adaptive planning and evolutionary development to early delivery and continuous improvement, are tailored to meet the unique demands of IoT projects. They facilitate efficient handling of high volumes of data, security concerns, and the integration of new technologies at the edge of networks. 


Human-Written Or Machine-Generated: Finding Intelligence In Language Models

What is intelligence? Most succinctly, it is the ability to reason and reflect, as well as to learn and to possess awareness of not just the present, but also the past and future. Yet as simple as this sounds, we humans have trouble applying it in a rational fashion to everything from pets to babies born with anencephaly, where instinct and unconscious actions are mistaken for intelligence and reasoning. Much as our brains will happily see patterns and shapes where they do not exist, these same brains will accept something as human-created when it fits our preconceived notions. People will often point to the output of ChatGPT – which is usually backed by the GPT-4 LLM – as an example of ‘artificial intelligence’, but what is not mentioned here is the enormous amount of human labor involved in keeping up this appearance. A 2023 investigation by New York Magazine and The Verge uncovered the sheer number of so-called annotators: people who are tasked with identifying, categorizing and otherwise annotating everything from customer responses to text fragments to endless amounts of images, depending on whether the LLM and its frontend is being used for customer support


How to Navigate the Pitfalls of Toxic Positivity in the Workplace

The shift from a culture of toxic positivity to one of authenticity requires a conscious effort from organizational leaders. It involves acknowledging and embracing the full spectrum of human emotions, not just the positive ones. Leaders must create a space where employees feel safe to express their genuine feelings, whether they are positive or negative. To cultivate an authentic workplace culture, leaders must first recognize the signs of toxic positivity. These signs include a lack of genuine communication, a culture of forced niceness and an avoidance of addressing real issues. Once identified, leaders can implement strategies that foster authenticity, such as encouraging open and honest communication, creating forums for sharing diverse perspectives and recognizing and addressing the challenges employees face. ... This means celebrating successes and joys, as well as being open to hearing and understanding the challenges and struggles. It involves shifting focus from external roles, often associated with a facade of positivity, to a more profound connection with our authentic selves. When we operate from a place of authenticity, the dichotomy of toxic positivity and negativity naturally dissolves.


Embracing Software Architecture

An architect cannot be proactive with more than 3-5 teams without changing their work (for example, becoming review-focused instead of design-focused), meaning a software architect will be optimally engaged with roughly this number of teams. However, software architects may scale their practice and maturity by leading larger and larger initiatives of architects/teams as long as they keep their own working relationship with a team or two. This ratio of 3-5 major stakeholders, teams, or projects recurs a great deal when interviewing architects. The ratio isn’t just architects to teams; it is architects to the organization and business model. How many projects/products there are in an organization is related to its size and complexity. The ratio of architects to new change initiatives is deeply telling. In places where that number is closer to 5% of IT – roughly one senior solution architect per medium-sized or larger project – and where the largest projects have more than one type of architect, the surveys, interviews and success measures improve significantly. ... We say strategy and execution all the time but in fact only pay attention to strategy OR execution. Then we let ‘the technical people handle it’ or say ‘that’s a business problem’ and we keep the two separate.


Key dimensions of cloud compliance and regulations in 2024

Firstly, organisations must identify and adhere to relevant regulations and industry standards. This involves a comprehensive understanding of the regulatory ecosystem and the compliance requirements specific to their industry. They must ensure that data management practices align with established guidelines. Corporations must also acknowledge and embrace responsibility for data stored in the cloud and put in place a secure configuration of the services being used. An organisation’s internal processes are pivotal in determining the security parameters of its cloud environment, encompassing elements such as access controls, encryption, and data classification. There has to be a comprehensive understanding of the intricacies of the cloud environment’s service and deployment models. Organisations must identify and categorise whether a service is Software as a Service (SaaS), Infrastructure as a Service (IaaS), or Platform as a Service (PaaS). Consequently, by understanding deployment models like hybrid, public, and private, organisations can tailor their compliance strategies accordingly.



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - January 27, 2024

The future of biometrics in a zero trust world

Nearly one in three CEOs and members of senior management have fallen victim to phishing scams, either by clicking on a phishing link or by sending money. C-level executives are the primary targets for biometric and deepfake attacks because they are four times more likely to be victims of phishing than other employees, according to Ivanti’s State of Security Preparedness 2023 Report. Ivanti found that whale phishing is the latest digital epidemic to attack the C-suite of thousands of companies. ... In response to the increasing need for better biometric security globally, Badge Inc. recently announced the availability of its patented authentication technology that renders personally identifiable information (PII) and biometric credential storage obsolete. Badge also announced an alliance with Okta, the latest in a series of partnerships aimed at strengthening Identity and Access Management (IAM) for their shared enterprise customers. Srivastava explained how her company’s approach to biometrics eliminates the need for passwords, device redirects, and knowledge-based authentication (KBA). Badge supports an enroll-once, authenticate-on-any-device workflow that scales across an enterprise’s many threat surfaces and devices.


Understanding CQRS Architecture

CRUD and CQRS are both tactical patterns, concentrating on the implementation specifics at the level of individual services. Therefore, asserting that an organization relies entirely on a CQRS architecture may not be entirely accurate. While certain services may adopt this architecture, it is typical for other services to employ simpler paradigms. The entire organization may not adhere to a unified style for all problems. The CRUD architecture assumes the existence of a single model for both read and update operations. CRUD operations are typically linked with traditional relational database systems, and numerous applications adopt a CRUD-based approach for data management. Conversely, the CQRS architecture assumes the presence of distinct models for queries and commands. While this paradigm is more intricate to implement and introduces certain subtleties, it provides the advantage of enabling stricter enforcement of data validation, implementation of robust security measures, and optimization of performance. These definitions may appear somewhat vague and abstract at the moment, but clarity will emerge as we delve into the details. It's important to note here that CQRS or CRUD should not be regarded as an overarching philosophy to be blindly applied in all circumstances. 
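
A stripped-down sketch makes the split concrete. Here, in-memory dictionaries stand in for separate write and read stores, and the projection step that would normally run asynchronously is inlined; all names are illustrative:

```python
from dataclasses import dataclass

# --- Write side: commands express intent and are strictly validated ---
@dataclass
class DepositFunds:
    account_id: str
    amount_cents: int

accounts: dict[str, int] = {}       # write model: the source of truth
balance_view: dict[str, str] = {}   # read model: pre-formatted for queries

def handle(cmd: DepositFunds) -> None:
    if cmd.amount_cents <= 0:
        raise ValueError("deposit must be positive")  # command-side validation
    accounts[cmd.account_id] = accounts.get(cmd.account_id, 0) + cmd.amount_cents
    # Project the change into the read model (often an asynchronous,
    # eventually consistent step in real systems).
    balance_view[cmd.account_id] = f"${accounts[cmd.account_id] / 100:.2f}"

# --- Read side: queries never touch the domain model ---
def get_balance(account_id: str) -> str:
    return balance_view.get(account_id, "$0.00")

handle(DepositFunds("acct-1", 2_500))
print(get_balance("acct-1"))  # -> $25.00
```

The payoff of the split shows up at scale: the read model can be denormalized, cached, or moved to a different store entirely without touching the validation rules on the write side.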


Role of Wazuh in building a robust cybersecurity architecture

Wazuh is a free and open source security solution that offers unified XDR and SIEM protection across several platforms. Wazuh protects workloads across virtualized, on-premises, cloud-based, and containerized environments to provide organizations with an effective approach to cybersecurity. By collecting data from multiple sources and correlating it in real-time, it offers a broader view of an organization's security posture. Wazuh plays a significant role in implementing a cyber security architecture, providing a platform for security information and event management, active response, compliance monitoring, and more. It provides flexibility and interoperability, enabling organizations to deploy Wazuh agents across diverse operating systems. Wazuh is equipped with a File Integrity Monitoring (FIM) module that helps detect file changes on monitored endpoints. It takes this a step further by combining the FIM module with threat detection rules and threat intelligence sources to detect malicious files allowing security analysts to stay ahead of the threat curve. Wazuh also provides out-of-the-box support for compliance frameworks like PCI DSS, HIPAA, GDPR, NIST SP 800-53, and TSC. 
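
The core mechanism behind any FIM module is a hash baseline plus a diff against it. The sketch below is a generic illustration of that idea, not Wazuh’s implementation or API:

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Map every file under `root` to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*")
        if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Report files added, removed, or modified since the baseline."""
    return {
        "added": sorted(current.keys() - baseline.keys()),
        "removed": sorted(baseline.keys() - current.keys()),
        "modified": sorted(
            f for f in baseline.keys() & current.keys() if baseline[f] != current[f]
        ),
    }

baseline = snapshot("/etc")
# ... later, after the system has been running ...
changes = diff(baseline, snapshot("/etc"))
```

What a platform like Wazuh adds on top of this primitive is the part that matters operationally: correlating each change with detection rules and threat intelligence before raising an alert.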


Budget cuts loom for data privacy initiatives

In addition to difficulty understanding the privacy regulatory landscape, organizations also face other data privacy challenges, including budget. 43% of respondents say their privacy budget is underfunded and only 36% say their budget is appropriately funded. When looking at the year ahead, only 24% say that they expect budget will increase (down 10 points from last year), and only one percent say it will remain the same (down 26 points from last year). 51% expect a decrease in budget, which is significantly higher than last year when only 12% expected a decrease in budget. For those seeking resources, technical privacy positions are in highest demand, with 62% of respondents indicating there will be increased demand for technical privacy roles in the next year, compared to 55% for legal/compliance roles. However, respondents indicate there are skills gaps among these privacy professionals; they cite experience with different types of technologies and/or applications (63%) as the biggest one. When looking at common privacy failures, respondents pinpointed the lack of or poor training (49%), not practicing privacy by design (44%) and data breaches (42%) as the main concerns.


How to become a Chief Information Security Officer

In general, the CISO position is well-paid. Due to high demand and a limited talent pool, top-tier CISOs have commanded salaries in excess of $2.3 million. Nonetheless, executive remuneration may vary based on industry, company size and specifics of a role. The CISO typically manages a team of cyber security experts (sometimes multiple teams) and collaborates with high-level business stakeholders to facilitate the strategic development and completion of cyber security initiatives. ... While experience in cyber security does count for a lot, and while smart and talented people do ascend to the CISO role without extensive formal schooling, it can pay to get the right education. Most enterprises will expect that a potential CISO have a bachelor’s degree in computer science (or a similar discipline). There are exceptions, but an undergraduate degree is often used as a credibility benchmark. ... When it comes to real-world experience, most CISO roles require a minimum of five years’ time spent in the industry. A potential CISO should maintain broad knowledge of a variety of platforms and solutions, along with a strong understanding of both cyber security history and modern day cyber security threats.


I thought software subscriptions were a ripoff until I did the math

Selling perpetual licenses means you get a big surge in revenue with each new release. But then you have to watch that cash pile dwindle as you work on the next version and try to convince your customers to pay for the upgrade. If you want the opportunity to continually improve your software, you need to bring in enough revenue each year to justify the time and resources you spend on the project. That's the difference between a sustainable business and a hobby. It strikes me that the real objection to software as a subscription isn't to the business model, but rather to the price. If you think a fair price for a piece of software is closer to $50 than $500, and you should be able to use it in perpetuity, you're telling the developer that you're willing to pay them no more than a few bucks a month. They're trying to tell you that's not enough to sustain a software business, and maybe you should try a free, open-source option instead. All the developers that are migrating to a cloud-based subscription model are taking a necessary step to help ensure their long-term survival. The challenge for companies playing in this space is to make it crystal clear that their subscriptions offer real value
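
The back-of-the-envelope math makes the developer’s side of that argument concrete; the five-year useful life below is an assumption:

```python
perpetual_price = 50.00       # one-time payment the customer considers fair
useful_life_months = 5 * 12   # assumed: five years before paying for an upgrade

monthly_revenue = perpetual_price / useful_life_months
print(f"${monthly_revenue:.2f} per month")  # $0.83 -- "a few bucks" is generous
```

At that rate, even a modest $5-per-month subscription represents roughly six times the sustainable revenue per user, which is exactly the gap the developer is trying to explain.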


Filling the Cybersecurity Talent Gap

Thankfully, there is a talented group in the veteran community ready and willing to meet the challenge. Through their unique skills, discipline, and unmatched experience, veterans are perfectly suited to help address the talent gap and growing cyber threats we face. Not only that, but veterans will find that IT and cybersecurity provide a second career as they transition out of their service. Veterans leave service with a wide range of talents that have several applications outside of the military. This includes both what are often called "soft skills," or those that are beneficial in a number of settings, as well as technical abilities well-suited for cybersecurity and IT. ... As the industry continues to incorporate more secure by design principles that guide how we approach security and cyber resiliency, we need a workforce that understands the importance of security and defense. To make this a reality, we need both the government and private companies to step up and create the right pathways for veterans to enter the workforce. This can include expanding the GI Bill to add additional incentives for careers in cybersecurity. Private companies should also offer more hands-on workshops and training that can both provide a way for applicants to learn and help companies fill their open positions.


How Much Architecture Is “Enough?”: Balancing the MVP and MVA Helps You Make Better Decisions

The critical challenge that the MVA must solve is that it must answer the MVP's current challenges while anticipating, but not actually solving, future challenges. In other words, the MVA must not require unacceptable levels of rework to actually solve those future problems. Some rework is okay and expected, but the words "complete rewrite" mean that the architecture has failed and all bets on viability are off. As a result, the MVA hangs in a dynamic balance between solving future problems that may never exist and letting technical debt pile up to the point where it leads, metaphorically, to architectural bankruptcy. Being able to balance these two forces is where experience comes in handy. ... The development team creates the initial MVA based on their initial and often incomplete understanding of the problems the MVA needs to solve. They will not usually have much in the way of quality attribute requirements (QARs), perhaps only broad organizational "standards" that are more aspirational than accurate. These initial statements are often so vague as to be unhelpful, e.g. "the system must support very large numbers of concurrent users", "the system must be easy to support and maintain", "the system must be secure against external threats", etc.
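
One practical way to keep such vague QARs from undermining the MVA is to restate each one as a measurable fitness function the team can run continuously. The sketch below is only illustrative: the concurrency target, latency budget, and endpoint URL are hypothetical placeholders a team would negotiate, not values from the article.

```python
# Sketch: turning "the system must support very large numbers of
# concurrent users" into a testable architectural fitness function.
# All thresholds and the endpoint URL are hypothetical placeholders.
import concurrent.futures
import time
import urllib.request

TARGET_CONCURRENT_USERS = 500               # agreed, measurable figure
MAX_P95_LATENCY_SECONDS = 0.8               # agreed latency budget
ENDPOINT = "http://localhost:8080/health"   # placeholder URL

def timed_request(_: int) -> float:
    start = time.perf_counter()
    urllib.request.urlopen(ENDPOINT, timeout=5).read()
    return time.perf_counter() - start

def test_concurrent_load() -> None:
    # Fire TARGET_CONCURRENT_USERS requests at once, check the p95 latency.
    with concurrent.futures.ThreadPoolExecutor(TARGET_CONCURRENT_USERS) as pool:
        latencies = sorted(pool.map(timed_request, range(TARGET_CONCURRENT_USERS)))
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    assert p95 <= MAX_P95_LATENCY_SECONDS, f"p95 latency {p95:.2f}s over budget"

if __name__ == "__main__":
    test_concurrent_load()
```

A failing run is an early, cheap signal that the architecture is drifting toward the "unacceptable rework" end of the balance.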


Group permission misconfiguration exposes Google Kubernetes Engine clusters

The problem is that in most other systems “authenticated users” are users that the administrators created or defined in the system. This is also the case in privately self-managed Kubernetes clusters and, for the most part, in clusters set up on other cloud service providers such as Azure or AWS. So, it’s not hard to see how some administrators might conclude that system:authenticated refers to a group of verified users and then decide to use it as an easy method to assign some permissions to all those trusted users. “GKE, in contrast to Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS), exposes a far-reaching threat since it supports both anonymous and full OpenID Connect (OIDC) access,” the Orca researchers said. “Unlike AWS and Azure, GCP’s managed Kubernetes solution considers any validated Google account as an authenticated entity. Hence, system:authenticated in GKE becomes a sensitive asset administrators should not overlook.” The Kubernetes API can integrate with many authentication systems, and since access to Google Cloud Platform and all of Google’s services in general is done through Google accounts, it makes sense to also integrate GKE with Google’s IAM and OAuth authentication and authorization system.
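
A quick audit can show whether a cluster already grants privileges this way. The sketch below assumes a working kubeconfig and the official kubernetes Python client; it simply lists ClusterRoleBindings whose subjects include system:authenticated or its equally broad siblings.

```python
# Sketch: flag cluster-wide role bindings granted to groups that, on GKE,
# effectively mean "any validated Google account" (or no account at all).
# Assumes `pip install kubernetes` and a kubeconfig for the target cluster.
from kubernetes import client, config

RISKY_SUBJECTS = {
    "system:authenticated",
    "system:unauthenticated",
    "system:anonymous",
}

def audit_cluster_role_bindings() -> None:
    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()
    for binding in rbac.list_cluster_role_binding().items:
        for subject in binding.subjects or []:  # subjects may be absent
            if subject.name in RISKY_SUBJECTS:
                print(f"{binding.metadata.name}: grants role "
                      f"{binding.role_ref.name} to {subject.name}")

if __name__ == "__main__":
    audit_cluster_role_bindings()
```

Any hit beyond the discovery-style bindings Kubernetes itself ships by default deserves a close look.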


Will the Rise of Generative AI Increase Technical Debt?

The rise of generative AI-related tools will likely increase technical debt, both due to the rush to adopt new capabilities and the need to mold AI models to suit specific requirements. “New LLMs and generative AI applications will undoubtedly increase technical debt in the future, or at a minimum, greatly increase the need to manage that debt proactively,” said Quillin. “It starts with new requirements to continually manage, maintain, and nurture these models from a broad range of new KPIs from bias, concept drift, and shifting business, consumer, and environmental inputs and goals,” he said. Incorporating AI may require a significant upfront commitment, leading to additional technical debt. “It won’t be just a build-and-maintain scenario, but rather, the first of many steps on a long road ahead,” said Prince Kohli, CTO of Automation Anywhere. Product companies with a generative AI focus must invest in creating a data and model strategy, a data architecture to work with AI, controls for the AI and more. “Technology disruptions and pivots such as this always lead to this kind of technical debt that must be continually paid down, but it’s the price of admittance,” he said.



Quote for the day:

“The best preparation for tomorrow is doing your best today.” -- H. Jackson Brown, Jr.

Daily Tech Digest - January 26, 2024

Why a Chief Cyber Resilience Officer is Essential in 2024

“We'll see the role popping up more and more as an operational outcome within security programs and more of a focus in business. In the wake of the pandemic and macroeconomic conditions and everything, what business leader isn’t thinking about business resilience? So, cyber resilience tucks nicely into that.” On the surface, the standalone CISO role isn’t much different, because it serves as the linchpin for securing the enterprise. There are many different flavors of CISO, says Hopkins: some are business-focused, with teams that take on more compliance tasks as opposed to more technical security operations. Other CISOs are more technical, meaning they’ll monitor threats in the environment and respond accordingly, while compliance is a separate function. However, the stark differences between the two roles lie in the mindset, approach, and target outcome. The CCRO’s mindset is “it’s not a matter of if, but when.” So, the CCRO’s approach is to anticipate cyber incidents and make incident response preparations that will mitigate material damage to the business. They act as a lifeline. This approach is arguably the role’s quintessential attribute.


How To Sell Enterprise Architecture To The Business

The best way to win buy-in for your enterprise architecture (EA) practice is to know who your stakeholders are and which of them will be the most receptive to your ideas. EA has a broad scope that impacts your entire business strategy, not just your application portfolio, so you need to adapt your presentations to your audience. Defining the specific parts of your EA practice that matter to each stakeholder will keep your discussion relevant and impactful. Put your processes in the context of the stakeholder's business area and show the immediate value you will create and the structure you have in place to do so. You can even offer to install EA processes into other teams' workflows to improve synergy with their toolsets. Just ensure that you highlight the benefits for them. Explaining to your marketing team how you plan to optimize your organization's finance software is not going to engage them. However, showcasing the information you have on your content management systems and MQL trackers will catch their interest. Once a group of key stakeholders is on board with your EA practice, you will have a set of EA evangelists and a collection of case studies that you can use to win over more and more stakeholders.


Quantum Breakthrough: Unveiling the Mysteries of Electron Tunneling

Tunneling is a fundamental process in quantum mechanics, involving the ability of a wave packet to cross an energy barrier that would be impossible to overcome by classical means. At the atomic level, this tunneling phenomenon significantly influences molecular biology: it speeds up enzyme reactions, causes spontaneous DNA mutations, and initiates the sequences of events that lead to the sense of smell. Photoelectron tunneling is a key process in light-induced chemical reactions, charge and energy transfer, and radiation emission. Optoelectronic chips and other devices are now approaching the sub-nanometer, atomic scale, where quantum tunneling effects between adjacent channels become significantly stronger. ... This work successfully reveals the critical role of neighboring atoms in electron tunneling in sub-nanometer complex systems. The discovery provides a new way to understand in depth the key role of the under-barrier Coulomb effect in electron tunneling dynamics and solid-state high-harmonic generation, and lays a solid research foundation for probing and controlling the tunneling dynamics of complex biomolecules.
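
For orientation, the standard textbook result for a rectangular barrier shows why tunneling is so sensitive to barrier width and height; this is general quantum mechanics, not a formula taken from the paper.

```latex
% Transmission probability (WKB approximation) for a particle of mass m
% and energy E tunneling through a rectangular barrier of height V_0 > E
% and width L; \hbar is the reduced Planck constant.
T \approx e^{-2\kappa L},
\qquad
\kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}
```

The exponential dependence on both L and V_0 - E is what makes sub-nanometer geometry, and the potentials of neighboring atoms, so influential in the tunneling dynamics.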


UK Intelligence Fears AI Will Fuel Ransomware, Exacerbate Cybercrime

“AI will primarily offer threat actors capability uplift in social engineering,” the NCSC said. “Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing. This will highly likely increase over the next two years as models evolve and uptake increases.” The other worry concerns hackers using today’s AI models to quickly sift through the gigabytes or even terabytes of data they loot from a target. A human could take weeks to analyze the information, but an AI model could be programmed to pluck out important details within minutes, helping hackers launch new attacks or schemes against victims. ... Despite the potential risks, the NCSC's report did find one positive: “The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design.” So it’s possible the cybersecurity industry could develop AI smart enough to counter next-generation attacks. But time will tell. Meanwhile, other cybersecurity firms, including Kaspersky, say they've also spotted cybercriminals "exploring" the use of AI programs.


Machine learning for Java developers: Algorithms for machine learning

In supervised learning, a machine learning algorithm is trained to correctly respond to questions related to feature vectors. To train an algorithm, the machine is fed a set of feature vectors, each with an associated label. Labels are typically provided by a human annotator and represent the right answer to a given question. The learning algorithm analyzes feature vectors and their correct labels to find internal structures and relationships between them. Thus, the machine learns to correctly respond to queries. ... In unsupervised learning, the algorithm is programmed to predict answers without human labeling, or even questions. Rather than predetermining labels or what the results should be, unsupervised learning harnesses massive data sets and processing power to discover previously unknown correlations. In consumer product marketing, for instance, unsupervised learning could be used to identify hidden relationships or consumer groupings, eventually leading to new or improved marketing strategies. ... The challenge of machine learning is to define a target function that will work as accurately as possible for unknown, unseen data instances.
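
The distinction is easiest to see in a few lines of code. The sketch below uses scikit-learn with toy data; both the data and the model choices are my own illustration, not code from the article.

```python
# Minimal sketch of supervised vs. unsupervised learning.
# Toy data and model choices are illustrative only.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Feature vectors, e.g. [weight_kg, size_cm] for some product.
features = [[5.0, 200], [4.5, 180], [1.2, 30], [0.9, 25]]

# Supervised: each vector is paired with a human-provided label
# (the "right answer"), and the model learns to reproduce it.
labels = ["large", "large", "small", "small"]
classifier = LogisticRegression().fit(features, labels)
print(classifier.predict([[4.8, 190]]))  # -> ['large']

# Unsupervised: the same vectors with no labels at all; the algorithm
# discovers groupings (correlations) on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(clusters)  # e.g. [1 1 0 0] -- two discovered segments
```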


How to protect your data privacy: A digital media expert provides steps you can take and explains why you can’t go it alone

The dangers you face online take very different forms, and they require different kinds of responses. The kind of threat you hear about most in the news is the straightforwardly criminal sort: hackers and scammers. The perpetrators typically want to steal victims’ identities or money, or both. These attacks take advantage of varying legal and cultural norms around the world. Businesses and governments often offer to defend people from these kinds of threats, without mentioning that they can pose threats of their own. A second kind of threat comes from businesses that lurk in the cracks of the online economy. Lax protections allow them to scoop up vast quantities of data about people and sell it to abusive advertisers, police forces and others willing to pay. Private data brokers most people have never heard of gather data from apps, transactions and more, and they sell what they learn about you without needing your approval. A third kind of threat comes from established institutions themselves, such as the large tech companies and government agencies. These institutions promise a kind of safety if people trust them: protection from everyone but themselves, as they liberally collect your data.


Pwn2Own 2024: Tesla Hacks, Dozens of Zero-Days in Electric Vehicles

"The attack surface of the car it's growing, and it's getting more and more interesting, because manufacturers are adding wireless connectivities, and applications that allow you to access the car remotely over the Internet," Feil says. Ken Tindell, chief technology officer of Canis Automotive Labs, seconds the point. "What is really interesting is how so much reuse of mainstream computing in cars brings along all the security problems of mainstream computing into cars." "Cars have had this two worlds thing for at least 20 years," he explains. First, "you've got mainstream computing (done not very well) in the infotainment system. We've had this in cars for a while, and it's been the source of a huge number of vulnerabilities — in Bluetooth, Wi-Fi, and so on. And then you've got the control electronics, and the two are very separate domains. Of course, you get problems when that infotainment then starts to touch the CAN bus that's talking to the brakes, headlights, and stuff like that." It's a conundrum that should be familiar to OT practitioners: managing IT equipment alongside safety-critical machinery, in such a way that the two can work together without spreading the former's nuisances to the latter. 


Does AI give InfiniBand a moment to shine? Or will Ethernet hold the line?

Ethernet’s strengths include its openness and its ability to do a more than decent job for most workloads, a factor appreciated by cloud providers and hyperscalers who either don't want to manage a dual-stack network or become dependent on the small pool of InfiniBand vendors. Nvidia's SpectrumX portfolio uses a combination of Nvidia's 51.2 Tb/s Spectrum-4 Ethernet switches and BlueField-3 SuperNICs to provide InfiniBand-like network performance, reliability, and latencies using 400 Gb/s RDMA over Converged Ethernet (RoCE). Broadcom has made similar claims across its Tomahawk and Jericho switch lines, which either use data processing units to manage congestion or handle it in the top-of-rack switch with the Jericho3-AI platform, announced last year. To Broadcom's point, hyperscalers and cloud providers such as AWS have done just that, Boujelbene said. The analyst noted that what Nvidia has done with SpectrumX is compress this work into a platform that makes it easier to achieve low-loss Ethernet. And while Microsoft has favored InfiniBand for its AI cloud infrastructure, AWS is taking advantage of improving congestion management techniques in its own Elastic Fabric Adapter 2 (EFA2) network.


The Evolution & Outlook of the Chief Information Security Officer

Beyond mere implementation, the CISO also carries the mantle of education, nurturing a cybersecurity-conscious environment by making every employee cognizant of potential cyber threats and effective preventive measures. As the digital landscape shifts beneath our feet, the roles and responsibilities of the CISO have significantly evolved, casting a larger shadow over the organization’s operations and extending far beyond the traditional confines of IT risk management. No longer confined to the realm of technology alone, the CISO has become an integral component of the broader business matrix. They stand at the intersection of business and technology, needing to balance the demands of both spheres in order to effectively steer the organization toward a secure digital future. ... The increasingly digitalized and interconnected world of today has thrust the role of the CISO into the limelight. Their duties have become crucial as organizations navigate a complex and ever-evolving cybersecurity landscape. Protecting customer data, adhering to intricate regulations, and ensuring seamless business operations in the face of potential cyber threats are prime priorities that necessitate the presence of a CISO.


To Address Security Data Challenges, Decouple Your Data

Why is this a good thing? It can ultimately help you gain a holistic perspective of all the security tools you have in your organization to ensure you’re leveraging the intrinsic value of each one. Most organizations have dozens of security tools, if not more, but few have a solid understanding or mapping of what data should go into the SIEM solution, what should come out, and what data is used for security analytics, compliance, or reporting. As data becomes more complex, extracting value and aggregating insights become more difficult. When you decide to decouple the data from the SIEM system, you have an opportunity to evaluate your data. As you move towards an integrated data layer where disparate data is consolidated, you can clean, deduplicate, and enrich it. Then you have the chance to merge that data not only with other security data but with enterprise IT and business data, too. Decoupling the data into a layer where disparate data is woven together and normalized for multidomain data use cases allows your organization to easily take HR data, organizational data, and business logic and transform it all into ready-to-use business data where security is a use case.
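
As a rough illustration of that integrated data layer, the sketch below normalizes and deduplicates some toy security events and then enriches them with HR data. The column names and records are hypothetical, not drawn from the article.

```python
# Sketch: clean, deduplicate, and enrich security events outside the SIEM,
# so the same curated data can feed analytics, compliance, or reporting.
# All field names and rows are hypothetical examples.
import pandas as pd

events = pd.DataFrame({
    "user":      ["ALICE", "alice", "bob"],
    "source_ip": ["10.0.0.5", "10.0.0.5", "10.0.0.9"],
    "action":    ["login_failed", "login_failed", "login_ok"],
})
hr = pd.DataFrame({
    "user":       ["alice", "bob"],
    "department": ["finance", "engineering"],
})

# Normalize, then drop exact duplicates left over from overlapping feeds.
events["user"] = events["user"].str.lower()
events = events.drop_duplicates()

# Enrich: weave security events together with business (HR) context.
enriched = events.merge(hr, on="user", how="left")
print(enriched)
```

The curated, enriched table, rather than raw tool output, becomes the unit you route to the SIEM or to any other consumer.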



Quote for the day:

“If my mind can conceive it, my heart can believe it, I know I can achieve it!” -- Jesse Jackson