Daily Tech Digest - January 28, 2024

Evolution of Data Governance with Eric Falthzik

Falthzik explained that although those policies and guardrails are still important, business now moves too quickly to allow for such a slow-moving process. Workers need self-service access to data and analytics to remain competitive in the future. He added, “Enabling self-service involves some new areas of governance -- for example, pursuing active metadata management and being more diligent about data quality. We also need to discuss how we’re going to go forward in a world of data products and AI.” Another key component of modern data governance Falthzik recommends is implementing a federated architecture. “A centralized environment is part of the old-school process of a small group maintaining tight control over data; it won’t work in a self-service environment,” he said. “Business workers want to feel some sense of involvement in the process of governing the data they use daily. Additionally, some new concepts such as the data mesh recommend that the data domains be given far more autonomy, which can’t be done in a centralized environment.” He also noted that an added benefit of assigning more data operations to the business is that it will help identify those who would make the best data stewards.


Tech Works: How to Build a Career Like a Pragmatic Engineer

Specialist or Generalist? This is the question Orosz is asked in his continued conversations with developers: Should they dive deep into one technology or go broad? In last month’s issue of Tech Works, Kelsey Hightower argued you have to go deep to then be able to back up and take in the big picture. “It depends on the context of your company,” Orosz said. He offered an example: “If you’re, let’s say, a native mobile engineer, and everyone around you is a native mobile engineer, and there’s no opportunities to do web development, then probably the right thing is to go deep into that technology.” After all, you will have expert native mobile engineers around you to help you become an expert, too. At another point in your career, you may find yourself at a larger company that has many opportunities to learn from different teammates, tools and contexts. Take advantage. “As a software engineer, you don’t need any book, if you’re in the right environment — you have your peers, your colleagues, your mentors, your managers,” Orosz said. “And, if you’re in a good environment, they’ll help you grow with them.”


The testing pyramid: Strategic software testing for Agile teams

CI/CD automates the process of building, testing, and deploying your code, giving you complete control over how and when your tests and other development tasks are executed. The iterative nature of CI/CD processes means they integrate perfectly with the testing pyramid model, particularly in Agile environments. A typical CI/CD pipeline executes unit tests on every commit to a development branch, providing immediate feedback to help developers catch issues early in the development cycle. Integration tests typically run after unit tests have successfully passed and before a merge into the main branch, ensuring that different components work well together before significant changes are integrated into the broader application. End-to-end (E2E) tests are usually executed after all changes have been merged into the main branch and before deployment to staging or production environments, serving as a final verification that the application meets all requirements and functions correctly in an environment that closely mimics the production setting. This approach is a boon to Agile teams, facilitating rapid development and deployment. 
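
To make the staging concrete, here is a minimal sketch of how such a pipeline can slice the pyramid using pytest markers. The marker names, functions, and stage commands are illustrative assumptions, not taken from the article:

```python
# test_pyramid.py -- a sketch of pyramid-layered tests with pytest. The
# marker names ("integration", "e2e") are illustrative and would need to be
# registered in pytest.ini; a CI pipeline then selects one layer per stage:
#   commit stage:     pytest -m "not integration and not e2e"
#   pre-merge stage:  pytest -m integration
#   post-merge stage: pytest -m e2e
from unittest.mock import Mock

import pytest

def parse_price(text: str) -> float:
    return float(text.strip().lstrip("$"))

def test_parse_price_unit():
    # Unit test: fast, no I/O, runs on every commit for immediate feedback.
    assert parse_price(" $19.99 ") == 19.99

@pytest.mark.integration
def test_price_service_integration():
    # Integration test: exercises components together (stubbed here);
    # runs after unit tests pass and before merging into the main branch.
    repo = Mock()
    repo.get_raw_price.return_value = "$19.99"
    assert parse_price(repo.get_raw_price()) == 19.99

@pytest.mark.e2e
def test_checkout_e2e():
    # E2E test: would drive the deployed application (placeholder here);
    # runs after merge, before promotion to staging or production.
    assert parse_price("$0.99") < 1.0
```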


Digital tools transforming approach to omnichannel: Cloud, AI ensure seamless customer experience, data security

According to McKinsey, offering an omnichannel experience is no longer an option for retail organisations – it is vital to their very survival. In its report, the consulting firm pointed out that, while organisations may well look at omnichannel operations in isolation, customers do not, expecting a seamless experience regardless of whether they are in the store or browsing online. The role of digitisation in transforming the retail sector’s operations has been comprehensive – from marketing all the way to tailoring customer experience across channels. ... McKinsey estimates that concerted efforts to offer a personalised omnichannel experience can help organisations register an uptick in revenue of between 5% and 15%. The results Starbucks registered a decade after it launched a campaign allowing customers to place orders online and offering cashback and personalised rewards – USD 1 billion in prepaid mobile deposits – offer but a glimpse of the impact customisation of experience can have on an organisation’s bottom line. Personalisation of experience across all touchpoints requires organisations to compile large datasets on every customer to enhance the quality of that customer’s engagement with the brand.


Google’s New AI Is Learning to Diagnose Patients

Navigating health care systems as a patient can be daunting at the best of times, whether you’re interpreting jargon-filled diagnoses or determining which specialists to see next. Similarly, doctors often have grueling schedules that make it difficult to offer personalized attention to all their patients. These issues are only exacerbated in areas with limited physicians and medical infrastructure. Bringing AI into the doctor’s office to alleviate these problems is a dream that researchers have been working toward since IBM’s Watson made its debut over a decade ago, but progress toward these goals has been slow-moving. Now, large language models (LLMs), including ChatGPT, could have the potential to reinvigorate those ambitions. Researchers at Google DeepMind have proposed a new AI model called AMIE (Articulate Medical Intelligence Explorer) in a recent preprint paper published 11 January on arXiv. The model could take in information from patients and provide clear explanations of medical conditions in a wellness visit consultation. Vivek Natarajan is an AI researcher at Google and lead author on the recent paper.


Agile Methodologies for Edge Computing in IoT

Agile methodologies, with their iterative and incremental approach, are well-suited for the dynamic nature of IoT projects. They allow for continuous adaptation to changing requirements and rapid problem-solving, which is crucial in the IoT landscape where technologies and user needs evolve quickly. In the realm of IoT and edge computing, the dynamic and often unpredictable nature of projects necessitates an approach that is both flexible and robust. Agile methodologies stand out as a beacon in this landscape, offering a framework that can adapt to rapid changes and technological advancements. By embracing key Agile practices, developers and project managers can navigate the complexities of IoT and edge computing with greater ease and precision. These practices, ranging from adaptive planning and evolutionary development to early delivery and continuous improvement, are tailored to meet the unique demands of IoT projects. They facilitate efficient handling of high volumes of data, security concerns, and the integration of new technologies at the edge of networks. 


Human-Written Or Machine-Generated: Finding Intelligence In Language Models

What is intelligence? Most succinctly, it is the ability to reason and reflect, as well as to learn and to possess awareness of not just the present, but also the past and future. Yet as simple as this sounds, we humans have trouble applying it in a rational fashion to everything from pets to babies born with anencephaly, where instinct and unconscious actions are mistaken for intelligence and reasoning. Much as our brains will happily see patterns and shapes where they do not exist, these same brains will accept something as human-created when it fits our preconceived notions. People will often point to the output of ChatGPT – which is usually backed by the GPT-4 LLM – as an example of ‘artificial intelligence’, but what is not mentioned here is the enormous amount of human labor involved in keeping up this appearance. A 2023 investigation by New York Magazine and The Verge uncovered the sheer numbers of so-called annotators: people who are tasked with identifying, categorizing and otherwise annotating everything from customer responses to text fragments to endless amounts of images, depending on whether the LLM and its frontend are being used for customer support


How to Navigate the Pitfalls of Toxic Positivity in the Workplace

The shift from a culture of toxic positivity to one of authenticity requires a conscious effort from organizational leaders. It involves acknowledging and embracing the full spectrum of human emotions, not just the positive ones. Leaders must create a space where employees feel safe to express their genuine feelings, whether they are positive or negative. To cultivate an authentic workplace culture, leaders must first recognize the signs of toxic positivity. These signs include a lack of genuine communication, a culture of forced niceness and an avoidance of addressing real issues. Once identified, leaders can implement strategies that foster authenticity, such as encouraging open and honest communication, creating forums for sharing diverse perspectives and recognizing and addressing the challenges employees face. ... This means celebrating successes and joys, as well as being open to hearing and understanding the challenges and struggles. It involves shifting focus from external roles, often associated with a facade of positivity, to a more profound connection with our authentic selves. When we operate from a place of authenticity, the dichotomy of toxic positivity and negativity naturally dissolves.


Embracing Software Architecture

An architect cannot be proactive with more than 3-5 teams without changing the nature of their work (for example, becoming review focused instead of design focused), meaning a software architect will be optimally engaged with roughly this number of teams. However, software architects may scale their practice and maturity by leading larger and larger initiatives of architects and teams, as long as they keep their own working relationship with a team or two. This ratio of 3-5 major stakeholders, teams, or projects recurs a great deal when interviewing architects. The ratio isn’t just architects to teams; it is architects to the organization and business model. How many projects and products there are in an organization is related to its size and complexity, and the ratio of new change initiatives to architects is deeply telling. In places where that number is closer to 5% of IT (roughly one senior solution architect per medium-sized or larger project), and where the largest projects have more than one type of architect, the surveys, interviews and success measures rise significantly. ... We say ‘strategy and execution’ all the time but in fact only pay attention to strategy OR execution. Then we let ‘the technical people handle it’ or say ‘that’s a business problem’, and we keep the two separate.


Key dimensions of cloud compliance and regulations in 2024

Firstly, organisations must identify and adhere to relevant regulations and industry standards. This involves a comprehensive understanding of the regulatory ecosystem and the compliance requirements specific to their industry. They must ensure that data management practices align with established guidelines. Corporations must also acknowledge and embrace responsibility for data stored in the cloud and ensure a secure configuration of the services being used. An organisation’s internal processes are pivotal in determining the security parameters of its cloud environment, encompassing elements such as access controls, encryption, and data classification. There must also be a comprehensive understanding of the intricacies of the cloud environment’s service and deployment models. Organisations must identify and categorise whether a service is Software as a Service (SaaS), Infrastructure as a Service (IaaS), or Platform as a Service (PaaS). Similarly, by understanding deployment models like hybrid, public, and private, organisations can tailor their compliance strategies accordingly.
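
As a rough illustration of that categorization step, the sketch below maps service models to the controls a customer typically still owns. The mappings and names are simplified assumptions, not an authoritative shared-responsibility matrix:

```python
# Toy categorization helper: given a service and deployment model, surface
# the controls that remain the customer's responsibility. Simplified for
# illustration only.
CUSTOMER_CONTROLS = {
    "SaaS": ["access controls", "data classification"],
    "PaaS": ["application code", "access controls", "data classification"],
    "IaaS": ["os hardening", "encryption", "access controls",
             "data classification"],
}

def compliance_scope(service_model: str, deployment: str) -> str:
    controls = ", ".join(CUSTOMER_CONTROLS[service_model])
    return f"{service_model} on a {deployment} cloud: customer owns {controls}"

print(compliance_scope("IaaS", "hybrid"))
```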



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - January 27, 2024

The future of biometrics in a zero trust world

Nearly one in three CEOs and members of senior management have fallen victim to phishing scams, either by clicking on the same link or sending money. C-level executives are the primary targets for biometric and deep fake attacks because they are four times more likely to be victims of phishing than other employees, according to Ivanti’s State of Security Preparedness 2023 Report. Ivanti found that whale phishing is the latest digital epidemic to attack the C-suite of thousands of companies. ... In response to the increasing need for better biometric security globally, Badge Inc. recently announced the availability of its patented authentication technology that renders personal identity information (PII) and biometric credential storage obsolete. Badge also announced an alliance with Okta, the latest in a series of partnerships aimed at strengthening Identity and Access Management (IAM) for their shared enterprise customers. Srivastava explained how her company’s approach to biometrics eliminates the need for passwords, device redirects, and knowledge-based authentication (KBA). Badge supports an enroll once and authenticate on any device workflow that scales across an enterprise’s many threat surfaces and devices. 


Understanding CQRS Architecture

CRUD and CQRS are both tactical patterns, concentrating on the implementation specifics at the level of individual services. Therefore, asserting that an organization relies entirely on a CQRS architecture may not be entirely accurate. While certain services may adopt this architecture, it is typical for other services to employ simpler paradigms. The entire organization may not adhere to a unified style for all problems. The CRUD architecture assumes the existence of a single model for both read and update operations. CRUD operations are typically linked with traditional relational database systems, and numerous applications adopt a CRUD-based approach for data management. Conversely, the CQRS architecture assumes the presence of distinct models for queries and commands. While this paradigm is more intricate to implement and introduces certain subtleties, it provides the advantage of enabling stricter enforcement of data validation, implementation of robust security measures, and optimization of performance. These definitions may appear somewhat vague and abstract at the moment, but clarity will emerge as we delve into the details. It's important to note here that CQRS or CRUD should not be regarded as an overarching philosophy to be blindly applied in all circumstances. 
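
As a concrete, minimal sketch of the distinction (all class and field names are hypothetical), the following Python fragment separates a command handler that validates writes from a read model built purely for queries:

```python
# Minimal CQRS sketch: the write side validates and applies commands; the
# read side serves queries from a separately maintained, denormalized view.
from dataclasses import dataclass

@dataclass
class PlaceOrder:            # command: expresses intent to change state
    order_id: str
    amount: float

class OrderCommandHandler:   # write model: enforces validation rules
    def __init__(self, event_log: list):
        self.event_log = event_log

    def handle(self, cmd: PlaceOrder) -> None:
        if cmd.amount <= 0:
            raise ValueError("order amount must be positive")
        self.event_log.append(("OrderPlaced", cmd.order_id, cmd.amount))

class OrderReadModel:        # read model: optimized for queries, rebuilt
    def __init__(self, event_log: list):   # from the same event log
        self.by_id = {oid: amt for kind, oid, amt in event_log
                      if kind == "OrderPlaced"}

    def get_order_total(self, order_id: str) -> float:
        return self.by_id.get(order_id, 0.0)

log: list = []
OrderCommandHandler(log).handle(PlaceOrder("o-1", 42.0))
print(OrderReadModel(log).get_order_total("o-1"))  # 42.0
```

Because the two models are decoupled, each side can be validated, secured, and scaled on its own terms, which is exactly the tradeoff the excerpt describes.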


Role of Wazuh in building a robust cybersecurity architecture

Wazuh is a free and open source security solution that offers unified XDR and SIEM protection across several platforms. Wazuh protects workloads across virtualized, on-premises, cloud-based, and containerized environments to provide organizations with an effective approach to cybersecurity. By collecting data from multiple sources and correlating it in real-time, it offers a broader view of an organization's security posture. Wazuh plays a significant role in implementing a cyber security architecture, providing a platform for security information and event management, active response, compliance monitoring, and more. It provides flexibility and interoperability, enabling organizations to deploy Wazuh agents across diverse operating systems. Wazuh is equipped with a File Integrity Monitoring (FIM) module that helps detect file changes on monitored endpoints. It takes this a step further by combining the FIM module with threat detection rules and threat intelligence sources to detect malicious files, allowing security analysts to stay ahead of the threat curve. Wazuh also provides out-of-the-box support for compliance frameworks like PCI DSS, HIPAA, GDPR, NIST SP 800-53, and TSC.


Budget cuts loom for data privacy initiatives

In addition to difficulty understanding the privacy regulatory landscape, organizations also face other data privacy challenges, including budget. 43% of respondents say their privacy budget is underfunded and only 36% say their budget is appropriately funded. When looking at the year ahead, only 24% say that they expect budget will increase (down 10 points from last year), and only one percent say it will remain the same (down 26 points from last year). 51% expect a decrease in budget, which is significantly higher than last year when only 12% expected a decrease in budget. For those seeking resources, technical privacy positions are in highest demand, with 62% of respondents indicating there will be increased demand for technical privacy roles in the next year, compared to 55% for legal/compliance roles. However, respondents indicate there are skills gaps among these privacy professionals; they cite experience with different types of technologies and/or applications (63%) as the biggest one. When looking at common privacy failures, respondents pinpointed the lack of or poor training (49%), not practicing privacy by design (44%) and data breaches (42%) as the main concerns.


How to become a Chief Information Security Officer

In general, the CISO position is well-paid. Due to high demand and a limited talent pool, top-tier CISOs have commanded salaries in excess of $2.3 million. Nonetheless, executive remuneration may vary based on industry, company size and specifics of a role. The CISO typically manages a team of cyber security experts (sometimes multiple teams) and collaborates with high-level business stakeholders to facilitate the strategic development and completion of cyber security initiatives. ... While experience in cyber security does count for a lot, and while smart and talented people do ascend to the CISO role without extensive formal schooling, it can pay to get the right education. Most enterprises will expect that a potential CISO have a bachelor’s degree in computer science (or a similar discipline). There are exceptions, but an undergraduate degree is often used as a credibility benchmark. ... When it comes to real-world experience, most CISO roles require a minimum of five years’ time spent in the industry. A potential CISO should maintain broad knowledge of a variety of platforms and solutions, along with a strong understanding of both cyber security history and modern day cyber security threats.


I thought software subscriptions were a ripoff until I did the math

Selling perpetual licenses means you get a big surge in revenue with each new release. But then you have to watch that cash pile dwindle as you work on the next version and try to convince your customers to pay for the upgrade. If you want the opportunity to continually improve your software, you need to bring in enough revenue each year to justify the time and resources you spend on the project. That's the difference between a sustainable business and a hobby. It strikes me that the real objection to software as a subscription isn't to the business model, but rather to the price. If you think a fair price for a piece of software is closer to $50 than $500, and you should be able to use it in perpetuity, you're telling the developer that you're willing to pay them no more than a few bucks a month. They're trying to tell you that's not enough to sustain a software business, and maybe you should try a free, open-source option instead. All the developers that are migrating to a cloud-based subscription model are taking a necessary step to help ensure their long-term survival. The challenge for companies playing in this space is to make it crystal clear that their subscriptions offer real value
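
The arithmetic behind that point is easy to check. A back-of-the-envelope example, with an assumed usage horizon:

```python
# Back-of-the-envelope: what a one-time price implies per month of use.
perpetual_price = 50   # dollars, paid once
years_of_use = 4       # assumed horizon before the user would pay again

monthly_equivalent = perpetual_price / (years_of_use * 12)
print(f"${monthly_equivalent:.2f} per user per month")   # ~$1.04
```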


Filling the Cybersecurity Talent Gap

Thankfully, there is a talented group in the veteran community ready and willing to meet the challenge. Through their unique skills, discipline, and unmatched experience, veterans are perfectly suited to help address the talent gap and growing cyber threats we face. Not only that, but veterans will find that IT and cybersecurity provide a second career as they transition out of their service. Veterans leave service with a wide range of talents that have several applications outside of the military. This includes both what are often called "soft skills," or those that are beneficial in a number of settings, as well as technical abilities well-suited for cybersecurity and IT. ... As the industry continues to incorporate more secure by design principles that guide how we approach security and cyber resiliency, we need a workforce that understands the importance of security and defense. To make this a reality, we need both the government and private companies to step up and create the right pathways for veterans to enter the workforce. This can include expanding the GI Bill to add additional incentives for careers in cybersecurity. Private companies should also offer more hands-on workshops and training that can both provide a way for applicants to learn and help companies fill their open positions.


How Much Architecture Is “Enough?”: Balancing the MVP and MVA Helps You Make Better Decisions

The critical challenge that the MVA must solve is that it must answer the MVP’s current challenges while anticipating but not actually solving future challenges. In other words, the MVA must not require unacceptable levels of rework to actually solve those future problems. Some rework is okay and expected, but the words "complete rewrite" mean that the architecture has failed and all bets on viability are off. As a result of this, the MVA hangs in a dynamic balance between solving future problems that may never exist, and letting technical debt pile up to the point where it leads to, metaphorically, architectural bankruptcy. Being able to balance these two forces is where experience comes in handy. ... The development team creates the initial MVA based on their initial and often incomplete understanding of the problems the MVA needs to solve. They will not usually have much in the way of QARs (quality attribute requirements), perhaps only broad organizational "standards" that are more aspirational than accurate. These initial statements are often so vague as to be unhelpful, e.g. "the system must support very large numbers of concurrent users", "the system must be easy to support and maintain", "the system must be secure against external threats", etc.


Group permission misconfiguration exposes Google Kubernetes Engine clusters

The problem is that in most other systems “authenticated users” are users that the administrators created or defined in the system. This is also the case in privately self-managed Kubernetes clusters or for the most part in clusters set up on other cloud services providers such as Azure or AWS. So, it’s not hard to see how some administrators might conclude that system:authenticated refers to a group of verified users and then decide to use it as an easy method to assign some permissions to all those trusted users. “GKE, in contrast to Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS), exposes a far-reaching threat since it supports both anonymous and full OpenID Connect (OIDC) access,” the Orca researchers said. “Unlike AWS and Azure, GCP’s managed Kubernetes solution considers any validated Google account as an authenticated entity. Hence, system:authenticated in GKE becomes a sensitive asset administrators should not overlook.” The Kubernetes API can integrate with many authentication systems and since access to Google Cloud Platform and all of Google’s services in general is done through Google accounts, it makes sense to also integrate GKE with Google’s IAM and OAuth authentication and authorization system.
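
One practical way to look for this misconfiguration is to enumerate ClusterRoleBindings that reference these groups. The sketch below uses the official Kubernetes Python client and assumes working kubeconfig credentials for the cluster; it is a starting point for an audit, not a complete check (namespaced RoleBindings, for example, are not covered):

```python
# Rough audit sketch: list ClusterRoleBindings that grant permissions to
# system:authenticated (or system:unauthenticated), which on GKE includes
# any validated Google account. Assumes kubeconfig credentials are set up.
from kubernetes import client, config

RISKY_GROUPS = {"system:authenticated", "system:unauthenticated"}

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

for binding in rbac.list_cluster_role_binding().items:
    for subject in binding.subjects or []:
        if subject.kind == "Group" and subject.name in RISKY_GROUPS:
            print(f"{binding.metadata.name}: grants role "
                  f"{binding.role_ref.name} to {subject.name}")
```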


Will the Rise of Generative AI Increase Technical Debt?

The rise of generative AI-related tools will likely increase technical debt, both due to the rush to hastily adopt new capabilities and the need to mold AI models to suit specific requirements. “New LLMs and generative AI applications will undoubtedly increase technical debt in the future, or at a minimum, greatly increase the need to manage that debt proactively,” said Quillin. “It starts with new requirements to continually manage, maintain, and nurture these models from a broad range of new KPIs from bias, concept drift, and shifting business, consumer, and environmental inputs and goals,” he said. Incorporating AI may require a significant upfront commitment, leading to additional technical debt. “It won’t be just a build-and-maintain scenario, but rather, the first of many steps on a long road ahead,” said Prince Kohli, CTO of Automation Anywhere. Product companies with a generative AI focus must invest in creating a data and model strategy, a data architecture to work with AI, controls for the AI and more. “Technology disruptions and pivots such as this always lead to this kind of technical debt that must be continually paid down, but it’s the price of admittance,” he said.



Quote for the day:

"The best preparation for tomorrow is doing your best today." -- H. Jackson Brown, Jr.

Daily Tech Digest - January 26, 2024

Why a Chief Cyber Resilience Officer is Essential in 2024

“We'll see the role popping up more and more as an operational outcome within security programs and more of a focus in business. In the wake of the pandemic and macroeconomic conditions and everything, what business leader isn’t thinking about business resilience? So, cyber resilience tucks nicely into that.” On the surface, the standalone CISO role isn’t much different because it serves as the linchpin for securing the enterprise. There are many different flavors of CISO, with some being business-focused, says Hopkins, whose teams take on more compliance tasks as opposed to more technical security operations. Other CISOs are more technical, meaning they’ll monitor threats in the environment and respond accordingly, while compliance is a separate function. However, the stark differences between the two roles lie in the mindset, approach, and target outcome for the scenario. The CCRO’s mindset is “it’s not a matter of if, but when.” So, the CCRO’s approach is to anticipate cyber incidents and make incident response preparations that will mitigate material damage to a business. They act as a lifeline. This approach is arguably the role’s most quintessential attribute. 


How To Sell Enterprise Architecture To The Business

The best way to win buy-in for your enterprise architecture (EA) practice is to know who your stakeholders are and which of them will be the most receptive to your ideas. EA has a broad scope that impacts your entire business strategy beyond just your application portfolio, so you need to adapt your presentations to your audience. Defining the specific parts of your EA practice that matter to each stakeholder will keep your discussion relevant and impactful. Put your processes in the context of the stakeholder's business area and show the immediate value you will create and the structure that you have in place to do so. You can even offer to help install EA processes into other teams' workflows to help improve synergy with their toolsets. Just ensure that you highlight the benefits for them. Explaining to your marketing team how you plan to optimize your organization's finance software is not going to engage them. However, showcasing the information you have on your content management systems and MQL trackers will catch their interest. Once a group of key stakeholders are on-board with your EA practice, you will have a group of EA evangelists and a selection of case studies that you can use to win over more and more stakeholders. 


Quantum Breakthrough: Unveiling the Mysteries of Electron Tunneling

Tunneling is a fundamental process in quantum mechanics, involving the ability of a wave packet to cross an energy barrier that would be impossible to overcome by classical means. At the atomic level, this tunneling phenomenon significantly influences molecular biology. It aids in speeding up enzyme reactions, causes spontaneous DNA mutations, and initiates the sequences of events that lead to the sense of smell. Photoelectron tunneling is a key process in light-induced chemical reactions, charge and energy transfer, and radiation emission. The size of optoelectronic chips and other devices is approaching the sub-nanometer atomic scale, where quantum tunneling effects between different channels become significantly enhanced. ... This work successfully reveals the critical role of neighboring atoms in electron tunneling in sub-nanometer complex systems. The discovery provides a new way to understand the key role of the under-barrier Coulomb effect in electron tunneling dynamics and solid-state high-harmonic generation, and lays a solid research foundation for probing and controlling the tunneling dynamics of complex biomolecules.


UK Intelligence Fears AI Will Fuel Ransomware, Exacerbate Cybercrime

“AI will primarily offer threat actors capability uplift in social engineering,” the NCSC said. “Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing. This will highly likely increase over the next two years as models evolve and uptake increases.” The other worry deals with hackers using today’s AI models to quickly sift through the gigabytes or even terabytes of data they loot from a target. For a human it could take weeks to analyze the information, but an AI model could be programmed to quickly pluck out important details within minutes to help hackers launch new attacks or schemes against victims. ... Despite the potential risks, the NCSC's report did find one positive: “The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design.” So it’s possible the cybersecurity industry could develop AI smart enough to counter next-generation attacks. But time will tell. Meanwhile, other cybersecurity firms including Kaspersky say they've also spotted cybercriminals "exploring" using AI programs.


Machine learning for Java developers: Algorithms for machine learning

In supervised learning, a machine learning algorithm is trained to correctly respond to questions related to feature vectors. To train an algorithm, the machine is fed a set of feature vectors and an associated label. Labels are typically provided by a human annotator and represent the right answer to a given question. The learning algorithm analyzes feature vectors and their correct labels to find internal structures and relationships between them. Thus, the machine learns to correctly respond to queries. ... In unsupervised learning, the algorithm is programmed to predict answers without human labeling, or even questions. Rather than predetermine labels or what the results should be, unsupervised learning harnesses massive data sets and processing power to discover previously unknown correlations. In consumer product marketing, for instance, unsupervised learning could be used to identify hidden relationships or consumer grouping, eventually leading to new or improved marketing strategies. ... The challenge of machine learning is to define a target function that will work as accurately as possible for unknown, unseen data instances. 
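
A small illustration of the two paradigms, with made-up data (scikit-learn is used here even though the article targets Java developers, since the concepts carry over directly):

```python
# The data is invented: feature vectors are [age, owns_home].
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

X = [[30, 0], [45, 1], [25, 0], [50, 1]]

# Supervised: each feature vector comes with a human-provided label, and
# the model learns to answer the labeled question for unseen vectors.
y = ["declined", "approved", "declined", "approved"]
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[48, 1]]))        # -> ['approved']

# Unsupervised: the same vectors with no labels; the algorithm discovers
# groupings on its own rather than predicting predefined answers.
print(KMeans(n_clusters=2, n_init=10).fit_predict(X))   # e.g. [1 0 1 0]
```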


How to protect your data privacy: A digital media expert provides steps you can take and explains why you can’t go it alone

The dangers you face online take very different forms, and they require different kinds of responses. The kind of threat you hear about most in the news is the straightforwardly criminal sort of hackers and scammers. The perpetrators typically want to steal victims’ identities or money, or both. These attacks take advantage of varying legal and cultural norms around the world. Businesses and governments often offer to defend people from these kinds of threats, without mentioning that they can pose threats of their own. A second kind of threat comes from businesses that lurk in the cracks of the online economy. Lax protections allow them to scoop up vast quantities of data about people and sell it to abusive advertisers, police forces and others willing to pay. Private data brokers most people have never heard of gather data from apps, transactions and more, and they sell what they learn about you without needing your approval. A third kind of threat comes from established institutions themselves, such as the large tech companies and government agencies. These institutions promise a kind of safety if people trust them – protection from everyone but themselves, as they liberally collect your data.


Pwn2Own 2024: Tesla Hacks, Dozens of Zero-Days in Electrical Vehicles

"The attack surface of the car it's growing, and it's getting more and more interesting, because manufacturers are adding wireless connectivities, and applications that allow you to access the car remotely over the Internet," Feil says. Ken Tindell, chief technology officer of Canis Automotive Labs, seconds the point. "What is really interesting is how so much reuse of mainstream computing in cars brings along all the security problems of mainstream computing into cars." "Cars have had this two worlds thing for at least 20 years," he explains. First, "you've got mainstream computing (done not very well) in the infotainment system. We've had this in cars for a while, and it's been the source of a huge number of vulnerabilities — in Bluetooth, Wi-Fi, and so on. And then you've got the control electronics, and the two are very separate domains. Of course, you get problems when that infotainment then starts to touch the CAN bus that's talking to the brakes, headlights, and stuff like that." It's a conundrum that should be familiar to OT practitioners: managing IT equipment alongside safety-critical machinery, in such a way that the two can work together without spreading the former's nuisances to the latter. 


Does AI give InfiniBand a moment to shine? Or will Ethernet hold the line?

Ethernet’s strengths include its openness and its ability to do a more than decent job for most workloads, a factor appreciated by cloud providers and hyperscalers who don't want to manage a dual-stack network or become dependent on the small pool of InfiniBand vendors. Nvidia's SpectrumX portfolio uses a combination of Nvidia's 51.2 Tb/s Spectrum-4 Ethernet switches and BlueField-3 SuperNICs to provide InfiniBand-like network performance, reliability, and latencies using 400 Gb/s RDMA over converged Ethernet (RoCE). Broadcom has made similar claims across its Tomahawk and Jericho switch lines, which either use data processing units to manage congestion or handle it in the top-of-rack switch with the Jericho3-AI platform, announced last year. To Broadcom's point, hyperscalers and cloud providers such as AWS have done just that, Boujelbene said. The analyst noted that what Nvidia has done with SpectrumX is compress this work into a platform that makes it easier to achieve low-loss Ethernet. And while Microsoft has favored InfiniBand for its AI cloud infrastructure, AWS is taking advantage of improving congestion management techniques in its own Elastic Fabric Adapter 2 (EFA2) network


The Evolution & Outlook of the Chief Information Security Officer

Beyond mere implementation, the CISO also carries the mantle of education, nurturing a cybersecurity-conscious environment by making every employee cognizant of potential cyber threats and effective preventive measures. As the digital landscape shifts beneath our feet, the roles and responsibilities of the CISO have significantly evolved, casting a larger shadow over the organization’s operations and extending far beyond the traditional confines of IT risk management. No longer confined to the realms of technology alone, the CISO has become an integral component of the broader business matrix. They stand at the intersection of business and technology, needing to balance the demands of both spheres in order to effectively steer the organization towards a secure digital future. ... The increasingly digitalized and interconnected world of today has thrust the role of the Chief Information Security Officer (CISO) into the limelight. Their duties have become crucial as organizations navigate a complex and ever-evolving cybersecurity landscape. Customer data protection, adherence to intricate regulations, and ensuring seamless business operations in the face of potential cyber threats are prime priorities that necessitate the presence of a CISO. 


To Address Security Data Challenges, Decouple Your Data

Why is this a good thing? It can ultimately help you gain a holistic perspective of all the security tools you have in your organization to ensure you’re leveraging the intrinsic value of each one. Most organizations have dozens of security tools, if not more, but most lack a solid understanding or mapping of what data should go into the SIEM solution, what should come out, and what data is used for security analytics, compliance, or reporting. As data becomes more complex, extracting value and aggregating insights become more difficult. When you decide to decouple the data from the SIEM system, you have an opportunity to evaluate your data. As you move towards an integrated data layer where disparate data is consolidated, you can clean, deduplicate, and enrich it. Then you have the chance to merge that data not only with other security data but with enterprise IT and business data, too. Decoupling the data into a layer where disparate data is woven together and normalized for multidomain data use cases allows your organization to easily take HR data, organizational data, and business logic and transform it all into ready-to-use business data where security is a use case. 
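
A toy sketch of that data-layer work, with entirely hypothetical field names: events from two tools are normalized into one schema, deduplicated, and then enriched with HR context before any downstream security analytics:

```python
# Illustrative only: normalize events from two security tools into one
# schema, deduplicate, then enrich with HR data -- the kind of work a
# decoupled data layer does before the SIEM or analytics sees anything.
def normalize_edr(e):   # tool A's shape -> common schema
    return {"ts": e["timestamp"], "user": e["user_id"].lower(),
            "action": e["event"], "source": "edr"}

def normalize_proxy(e): # tool B's shape -> common schema
    return {"ts": e["time"], "user": e["username"].lower(),
            "action": e["activity"], "source": "proxy"}

def dedupe(events):
    seen, out = set(), []
    for ev in events:
        key = (ev["ts"], ev["user"], ev["action"])
        if key not in seen:
            seen.add(key)
            out.append(ev)
    return out

hr = {"jdoe": {"dept": "finance", "status": "active"}}  # enrichment source

raw = [normalize_edr({"timestamp": "2024-01-26T10:00:00Z",
                      "user_id": "JDoe", "event": "file_read"}),
       normalize_proxy({"time": "2024-01-26T10:00:00Z",
                        "username": "jdoe", "activity": "file_read"})]

for ev in dedupe(raw):          # duplicates across tools collapse to one
    ev.update(hr.get(ev["user"], {}))   # merge business context
    print(ev)
```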



Quote for the day:

“If my mind can conceive it, my heart can believe it, I know I can achieve it!” -- Jesse Jackson

Daily Tech Digest - January 25, 2024

Building AI agents with Semantic Kernel

Microsoft’s Semantic Kernel team is building on OpenAI’s Assistant model to deliver one kind of intelligent agent, along with a set of tools to manage calling multiple functions. They’re also providing a way to manage the messages sent to and from the OpenAI API, and to use plugins to integrate general purpose chat with grounded data-driven integrations using RAG. The team is starting to go beyond the original LangChain-like orchestration model with the recent 1.01 release and is now thinking of Semantic Kernel as a runtime for a contextual conversation. That requires a lot more management of the conversation and prompt history used. All interactions will go through the chat function, with Semantic Kernel managing both inputs and outputs. There’s a lot going on here. First, we’re seeing a movement towards an AI stack. Microsoft’s Copilot model is perhaps best thought of as an implementation of a modern agent stack, building on the company’s investment in AI-ready infrastructure (for inference as well as training), its library of foundation models, all the way up to support for plugins that work across Microsoft’s and OpenAI’s platforms.
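
The conversation-runtime pattern described here can be sketched generically. The fragment below is not Semantic Kernel's actual API; it is a toy illustration of a kernel that routes every turn through one chat function while managing history and letting registered plugins contribute grounded, RAG-style context:

```python
# Generic sketch (not Semantic Kernel's real API) of a conversation runtime:
# one chat entry point, managed history, and pluggable grounding functions.
class MiniKernel:
    def __init__(self, llm):
        self.llm = llm                    # callable: (history) -> reply
        self.history = []                 # managed prompt/response history
        self.plugins = {}                 # name -> grounded data function

    def register_plugin(self, name, fn):
        self.plugins[name] = fn

    def chat(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        # RAG-style grounding: let plugins contribute context first.
        for name, fn in self.plugins.items():
            context = fn(user_message)
            if context:
                self.history.append({"role": "system",
                                     "content": f"[{name}] {context}"})
        reply = self.llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

# Toy usage with a stubbed model and a fake retrieval plugin.
kernel = MiniKernel(llm=lambda h: f"(model saw {len(h)} messages)")
kernel.register_plugin("docs",
                       lambda q: "ticket policy v2" if "ticket" in q else "")
print(kernel.chat("What is the ticket policy?"))
```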


CISOs’ role in identifying tech components and managing supply chains

A big problem today is that security teams are only involved at the end of a project as part of a “final sign-off” in many organizations. This creates friction between developers and security engineers; both may see the other as the root of the problem: “If these developers only wrote secure code, everyone’s lives would be easier.” and “Oh great, the security team is going to find a bunch of bugs and delay our launch. Again.” Organizations that involve security teams with development during the initial stages of design and scoping and have a few security reviews during the development process allow bugs to be addressed early in the cycle and provide an opportunity for the security team to educate developers on standard insecure coding practices. While no solution is perfect, this approach – adopted by companies like Microsoft in developing HyperV – helps avoid last-minute delays and animosity between the teams. ... Supply chain security needs to be a priority early in the development lifecycle. At the very least, open-source libraries and components should be audited for known vulnerabilities, and it’s worth looking at the vulnerability history of a component.
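
As one small example of auditing open-source components for known vulnerabilities, the sketch below queries the public OSV.dev database for a single pinned dependency; a real CI job would iterate over a full lockfile or SBOM:

```python
# Check one pinned dependency against the OSV.dev vulnerability database
# (https://api.osv.dev/v1/query). Requires the requests package.
import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem},
              "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

print(known_vulns("requests", "2.19.0"))  # an old version with known CVEs
```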


Navigating the Complexities of AI With a Socially Conscious Lens

The rapid spread of AI technology, while offering significant advantages, has also given rise to several concerning trends. Bias and discrimination inherent in AI systems can replicate and amplify existing societal prejudices, often at the expense of marginalized groups. Privacy erosion, another critical issue, poses risks of surveillance and data misuse. Additionally, the threat of job displacement due to automation, security vulnerabilities, and the ethical concerns posed by AI decision-making in sensitive areas are challenges that require immediate and thoughtful attention. In the context of hiring and recruiting, AI-driven bias is a significant concern. AI models, when trained on biased historical data, can inadvertently perpetuate discrimination, making it harder for certain groups, such as individuals with criminal records, to secure employment. For example, background checks are normally limited to seven years, but an AI model may contain data extending beyond that timeframe. Without proper protections in place, candidates may be flagged for offenses that are older than can legally be considered. This would not only impact individual lives but also reinforce systemic inequalities.


Beyond legal compliance: Timing and path for adoption of privacy preserving data processings and collaborations for value creation

We are already witnessing notable strides in standardising the movement and utilisation of financial and healthcare data through innovations in the Account Aggregator (AA) framework and the Ayushman Bharat Digital Mission (ABDM) healthcare data exchange. The systematic approach fostered by AA and ABDM presents an opportune moment to embed privacy at the heart of system architecture and design. In these ecosystems, Financial Information Users (FIUs) and Healthcare Information Users (HIUs) are particularly vulnerable to risks associated with the handling of users and business data. India stands at a critical juncture, with the potential to revolutionise how data is circulated through such aggregator systems. While these institutions access data streams with user consent, there is a risk of falling into the same conflicts observed in advanced digital economies. The crux of the issue lies in the intricate relationship between consent, data exploitation, and the often opaque interpretation of privacy with consent. Addressing this challenge is essential to avoid replicating the contentious dynamics seen in more mature digital markets and to pave the way for a more transparent, user-centric data ecosystem.


The White House Addresses Responsible AI: AI Safety and Data Privacy

Data privacy advocates in the United States have been working toward comprehensive privacy legislation since the late 1990s. Unlike some other regions, such as the European Union with its General Data Protection Regulation (GDPR), the US lacks a single, overarching law to protect individuals' privacy rights. Right now, over 55 state and federal laws coexist in the United States, offering various levels of privacy protections. Not only is it a nightmare for data breach response and notification, but the inconsistencies do Americans a disservice when it comes to adequately protecting data privacy as it leaves gaps in protection for individuals whose data may be handled differently depending on their location. ... The release of the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” by the Biden administration underscores the importance of legislation that unifies the existing patchwork of regulations, enforcement activities, and penalties under one comprehensive law. As the White House stated in their fact sheet, "AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems."


Entrepreneurship is a marathon

Every business model requires the Right to Win approach. So, what I look for in an entrepreneur is whether he has this Right to Win attitude. What I look for next is whether they are long-term entrepreneurs or opportunistic entrepreneurs. Many people want to be entrepreneurs today for the glamour and money in entrepreneurship. Entrepreneurship is not a sprint; it is a marathon with multiple ups and downs, and you should be able to withstand all that. You need to have the temperament to run a marathon. Remember, in the model that I follow now, I don't run the business; the entrepreneurs run it. I help, I support, but ultimately, they have to run the business. When I looked for an entrepreneur for Bluestone.com, what I had in mind was one who could disrupt the traditional jewellery market with technology. You may wonder what Gaurav Singh Kushwaha, an IIT-Delhi computer science graduate, is doing in the jewellery business when he is not a jeweller. It was his ability to design jewellery with the aid of computers and deliver exactly the same thing that attracted me. There is a lot of technology involved in the business.


The Case for ‘Shifting Right’

When we talk about shifting right, it’s not meant to be in place of shift left when it comes to ensuring secure software. Shifting right comes more into focus when you think about deployment. The greatest benefit of shifting right is the ability to see what software will actually look like once deployed while developers are still shaping and creating it. ... There are a myriad of issues that aren’t necessarily caught in the earlier stages of development, meaning that shifting left doesn’t cover everything. CI/CD code-checking can be performed earlier, but that doesn’t always create a full fix for problems. Issues often never even manifest until the software is actually deployed. So, why wouldn’t we check for that too? ... The phrase “shifting right” sounds innately counterintuitive to the shift-left mentality all software developers understand at the outset. But in reality, and in practice, putting these processes together ensures the best possible security and quality of your software. It’s critical to test early and find mistakes under real-world conditions. That way you’re ensuring the same high levels of quality and you’re protected from later issues by understanding how software looks at the end of the development cycle.


The Need for Secure Cloud Development Environments

Some of the popular interactive patterns explored by vendors are peer coding and the sharing of running applications for review. Peer coding is the ability of multiple developers to work on the same code at the same time. If you have used an online text editor such as Google Docs and shared it with another user for co-editing, peer coding is the same approach applied to code development: it allows a user to edit someone else's code in her environment. When running an application inside a CDE-based coding environment, it is possible to share the application with any user immediately. In a classic setting, this would require pre-emptively deploying the application to another server, or sharing the local device's IP address, provided that is possible. This process can be automated with CDEs. CDEs are delivered using a platform that is typically self-hosted by the organization in a private cloud or hosted by an online provider. In both cases, functionalities delivered by these environments are available to the local devices used to access the service without any installation.


HPE’s corporate emails breached by Russian state-sponsored actor ‘Cozy Bear’

It’s not known if this is part of a coordinated campaign targeting US tech giants, or if it was separate factions within Midnight Blizzard or Cozy Bear working on unique missions. “Beginning in late November 2023, the threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account’s permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents,” Microsoft said in a blog post disclosing the attack. Password spraying is a brute-force cyberattack where attackers use a common password across many accounts to bypass lockout policies. “The recent Microsoft breach and disclosure brings to the forefront two challenges: no one is immune (even global organizations) from threat actors, and as an organization, it will take time to put any fixes in place,” said Ravi Srinivasan, CEO, of cyber security firm Votiro. “Anytime a threat is detected, it’s costly and time-consuming to remediate.”


Agent Swarms – an evolutionary leap in intelligent automation

During the rapid evolution of AI, there emerges a concept that promises to redefine the very essence of automation. Agent Swarms, inspired by the remarkable collective behaviors of nature’s most efficient creatures, are poised to revolutionize our approach to complex problem-solving. As AI accelerates at a breakneck pace, the urgency to harness the potential of Agent Swarms becomes increasingly apparent. These autonomous software agents, working collaboratively in a decentralized fashion, are not just a technological marvel; they are an imperative response to the escalating complexity of today’s challenges. In a world where healthcare, finance, urban planning, agriculture, and countless other sectors grapple with ever more intricate issues, the demand for intelligent automation that can adapt and excel has never been more pressing. Agent Swarms, with their capacity for decentralized control and collective intelligence, and their promise of autonomous decision-making – have emerged as the answer to this urgent call. We humbly acknowledge our journey as thought leaders and practitioners in intelligent automation and AI. 



Quote for the day:

"Success is nothing more than a few simple disciplines, practiced every day." -- Jim Rohn

Daily Tech Digest - January 24, 2024

8 data strategy mistakes to avoid

Denying business users access to information because of data silos has been a problem for years. When different departments, business units, or groups keep data stored in systems not available to others, it diminishes the value of the data. Data silos result in inconsistencies and operational inefficiencies, says John Williams, executive director of enterprise data and advanced analytics at RaceTrac, an operator of convenience stores. ... Data governance should be at the heart of any data strategy. If not, the results can include poor data quality, lack of consistency, and noncompliance with regulations, among other issues. “Maintaining the quality and consistency of data poses challenges in the absence of a standardized data management approach,” Williams says. “Before incorporating Alation at RaceTrac, we struggled with these issues, resulting in a lack of confidence in the data and redundant efforts that impeded data-driven decision-making.” Organizations need to create a robust data governance framework, Williams says. This involves assigning data stewards, establishing transparent data ownership, and implementing guidelines for data accuracy, accessibility, and security.


Regulators probe Microsoft’s OpenAI ties — is it only smoke, or is there fire?

Given that some of the world’s most powerful agencies regulating antitrust issues are looking into Microsoft’s relationship with OpenAI, the company has much to fear. Are they being fair, though? Is there not just smoke, but also fire? You might argue that AI — and genAI in particular — is so new, and the market so wide open, that these kinds of investigations are exceedingly preliminary, would only hurt competition, and represent governmental overreach. After all, Google, Facebook, Amazon, and billion-dollar startups are all competing in the same market. That shows there’s serious competition. But that’s not quite the point. The OpenAI soap opera shows that OpenAI is separate from Microsoft in name only. If Microsoft can use its $13 billion investment to reinstall Altman (and grab a seat on the board), even if it’s a nonvoting one, it means Microsoft is essentially in charge of the company. Microsoft and OpenAI have a significant lead over all their competitors. If governments wait too long to probe what’s going on, that lead could become insurmountable. 


AI will make scam emails look genuine, UK cybersecurity agency warns

The NCSC, part of the GCHQ spy agency, said in its latest assessment of AI’s impact on the cyber threats facing the UK that AI would “almost certainly” increase the volume of cyber-attacks and heighten their impact over the next two years. It said generative AI and large language models – the technology that underpins chatbots – will complicate efforts to identify different types of attack such as spoof messages and social engineering, the term for manipulating people to hand over confidential material. “To 2025, generative AI and large language models will make it difficult for everyone, regardless of their level of cybersecurity understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts.” Ransomware attacks, which had hit institutions such as the British Library and Royal Mail over the past year, were also expected to increase, the NCSC said. It warned that the sophistication of AI “lowers the barrier” for amateur cybercriminals and hackers to access systems and gather information on targets, enabling them to paralyse a victim’s computer systems, extract sensitive data and demand a cryptocurrency ransom.


Burnout epidemic proves there's too much Rust on the gears of open source

An engineer is keen to work on the project, opens up the issue tracker, and finds something they care about and want to fix. It's tricky, but all the easy issues have been taken. Finding a mentor is problematic since, as Nelson puts it, "all the experienced people are overworked and burned out," so the engineer ends up doing a lot of the work independently. "Guess what you've already learned at this point," wrote Nelson. "Work in this project doesn't happen unless you personally drive it forward." The engineer becomes a more active contributor. So active that the existing maintainer turns over a lot of responsibilities. They wind up reviewing PRs and feeling responsible for catching mistakes. They can't keep up with the PRs. They start getting tired ... and so on. Burnout can manifest itself in many ways, and dodging it comes down to self-care. While the Rust Foundation did not wish to comment on the subject, the problem of burnout is as common – if not more so – in the open source world as it is in the commercial one.


Steadfast Leadership And Identifying Your True North

When you apply the idea of true north across all facets of your organization, you can effectively keep your team aligned and moving in tandem. But without a clear and definitive direction, there’s no way to gauge whether everyone is rowing in the same direction. Clarifying your distinct true north is just the beginning. Once it’s established, team members at all levels, especially leadership, must understand it, refer to it often, and measure performance against it. This looks like continuously reviewing departmental metrics and the attitudes of teams and individuals to ensure that they are in alignment with the organization’s cardinal direction. Leaders must be able to see the connection between the processes and goals of individual teams and how they contribute to or inhibit long-term goals. If individuals or teams work against the desired direction (sometimes unknowingly!), it can slow or, in some cases, even reverse progress. The antidote is long-term alignment, but this can only come after a deep understanding of how the day-to-day affects long-term success, which requires accurate metrics, widespread accountability, and thorough analysis.


The Rise of the Serverless Data Architectures

The big lesson is that there is no free lunch. You have to understand the tradeoffs. If I go with Aurora, there are some things I don't have to think about, and other things I do have to think about. Transactions are not an issue; cold start is an issue; minimum payment may be an issue. If I go with something like DynamoDB, then those things are fine, but I have a key-value store. There are all kinds of things to take into consideration to make sure that you understand what each system is actually capable of delivering. The one thing to note is that while you will have to make tradeoffs if you decide you want a very elastic system, it does not require changing the whole way you use databases. Meaning, if you like key-value stores, there will be several for you to choose from. If you like relational, there will be a bunch. If you like a specific flavor of relational, MySQL fans will have options, and if you like Postgres, there are going to be options too. You don't have to change a lot about your worldview. This is not the same case if you try serverless functions, which means learning a whole new way to write and manage code, because I'm still trying to wrap my head around how to build functionality from a lot of small independent functions.


Navigating Generative AI Data Privacy and Compliance

Developers play a crucial role in protecting companies from the legal and ethical challenges linked to generative AI products. Faced with the risk of unintentionally exposing information (a longstanding problem) or now having the generative AI tool leak it on its own (as occurred when ChatGPT users reported seeing other people’s conversation histories), companies can implement strategies like the following to minimize liability and help ensure the responsible handling of customer data. ... Using anonymized and aggregated data serves as an initial barrier against the inadvertent exposure of individual customer information. Anonymizing data strips personally identifiable elements so that the generative AI system can learn and operate without associating specific details with individual users. ... Through meticulous access management, developers can restrict data access exclusively to individuals with specific tasks and responsibilities. By creating a tightly controlled environment, developers can proactively reduce the likelihood of data breaches, helping ensure that only authorized personnel can interact with and manipulate customer data within the generative AI system.
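
A minimal sketch of the anonymization idea (the regex patterns and salt handling are deliberately simplified, and the helper names are hypothetical):

```python
# Minimal sketch: pseudonymize identifiers before customer text reaches a
# generative AI system, so the model learns patterns without raw PII.
import hashlib
import re

SALT = "rotate-me-and-store-me-in-a-secrets-manager"  # placeholder

def pseudonymize(value: str) -> str:
    # Stable pseudonym: the same input always maps to the same token,
    # so the model can still learn per-user patterns without the PII.
    return "user_" + hashlib.sha256((SALT + value).encode()).hexdigest()[:8]

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(text: str) -> str:
    text = EMAIL.sub(lambda m: pseudonymize(m.group()), text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(anonymize("Contact jane.doe@example.com or 555-123-4567."))
# -> "Contact user_xxxxxxxx or [PHONE]."
```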


The Intersection of DevOps, Platform Engineering, and SREs

By automating manual processes, embracing continuous integration and continuous delivery (CI/CD), and instilling a mindset of shared responsibility, DevOps empowers teams to respond swiftly to market demands, ensuring that software is not just developed but delivered efficiently and reliably. Platform Engineering emerges as a key player in shaping the infrastructure that underpins modern applications. It is the architectural foundation that supports the deployment, scaling, and management of applications across diverse environments. The importance of Platform Engineering lies in providing a standardized, scalable, and efficient platform for development and operations teams. By offering a set of curated tools, services, and environments, Platform Engineers enable seamless collaboration and integration of DevOps practices. ... The importance of SREs lies in their dedication to ensuring the reliability, scalability, and high performance of systems and applications. SREs introduce a data-driven approach, defining service level objectives (SLOs) and error budgets to align technical operations with business objectives.
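
The error-budget idea lends itself to a worked example. Assuming a 99.9% availability SLO over a 30-day window:

```python
# Worked example of the SLO / error-budget relationship mentioned above.
slo = 0.999
period_minutes = 30 * 24 * 60            # 43,200 minutes in a 30-day window

error_budget_minutes = (1 - slo) * period_minutes
print(f"{error_budget_minutes:.1f} minutes of downtime allowed")  # 43.2

# If incidents have consumed 30 minutes so far, the remaining budget tells
# the team how much risk (e.g., aggressive releases) they can still take.
consumed = 30
print(f"{1 - consumed / error_budget_minutes:.0%} of budget remaining")
```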


Quantum-secure online shopping comes a step closer

The researchers’ QDS protocol involves three parties: a merchant, a client and a third party (TP). It begins with the merchant preparing two sequences of coherent quantum states, while the client and the TP prepare one sequence of coherent states each. The merchant and client then send a state via a secure quantum channel to an intermediary, who performs an interference measurement and shares the outcome with them. The same process occurs between the merchant and the TP. These parallel processes enable the merchant to generate two keys that they use to create a signature for the contract via one-time universal hashing. Once this occurs, the merchant sends the contract and the signature to the client. If the client agrees with the contract, they use their quantum state to generate a key in a similar way as the merchant and send this key to the TP. Similarly, the TP generates a key from their quantum state after receiving the contract and signature. Both the client and the TP can verify the signature by calculating the hash function and comparing their result to the signature. 
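
The final hashing-and-comparison step has a familiar classical shape. The toy sketch below mimics only that step with an ordinary keyed hash; the protocol's actual security comes from how the keys are generated in the quantum stage, which is not modeled here:

```python
# Classical analog of the verification step only: once the quantum stage
# has given the parties correlated keys, signing and checking reduce to
# keyed hashing and comparison. Toy sketch with a placeholder key.
import hashlib
import hmac

key = b"key-derived-from-quantum-stage"        # placeholder
contract = b"merchant sells 1 widget for $10"

signature = hmac.new(key, contract, hashlib.sha256).digest()   # merchant
assert hmac.compare_digest(                                    # client / TP
    signature, hmac.new(key, contract, hashlib.sha256).digest())
print("signature verified")
```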


The Top 10 Things Every Cybersecurity Professional Needs to Know About Privacy

The intersection between privacy and cybersecurity is ever increasing and the boundaries between the two ever blurring. By way of example – data breaches lived firmly in the realm of cybersecurity for many years. However, since the adoption of GDPR and the mandatory disclosure requirements of several data protection and privacy laws around the world, the balance of responsibility and ownership of data breaches has become blurred. ... the language of privacy is very different from that of cybersecurity – cybersecurity professionals talk about penetration tests, vulnerability assessments, ransomware attacks, firewalls, operating systems, malware, anti-virus, etc. Meanwhile, privacy professionals talk about data protection impact assessments, case law judgements, privacy by design and default, legitimate interest assessments, proportionality, etc. In fact, the language of privacy is not even consistent in its own right, with much confusion over the fundamental differences between data protection and privacy and over their definitions across jurisdictions.



Quote for the day:

"Leaders should influence others in such a way that it builds people up, encourages and edifies them so they can duplicate this attitude in others." -- Bob Goshen