Daily Tech Digest - February 18, 2023

Oracle outages serve as warning for companies relying on cloud technology

“Oracle engineers identified a performance issue within the back-end infrastructure supporting the OCI Public DNS API, which prevented some incoming service requests from being processed as expected during the impact window,” the company said on its cloud infrastructure website. In an update, the company said it implemented "an adaptive mitigation approach using real-time backend optimizations and fine-tuning of DNS Load Management to handle current requests." Oracle said the outage caused a variety of problems for customers. OCI customers using OCI Vault, API Gateway, Oracle Digital Assistant, and OCI Search with OpenSearch, for example, may have received 5xx-type errors or failures (which indicate server-side problems), Oracle said. Identity customers may have experienced issues when creating or modifying domains. In addition, Oracle Management Cloud customers may have been unable to create new instances or delete existing ones, Oracle said. Oracle Analytics Cloud, Oracle Integration Cloud, Oracle Visual Builder Studio, and Oracle Content Management customers may have encountered failures when creating new instances.
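Transient 5xx responses like those described are usually handled on the client side with retries and exponential backoff, so that brief back-end hiccups don't cascade into application failures. A minimal sketch of that pattern (the request function and limits here are illustrative, not Oracle's actual API):

```python
import random
import time

def call_with_retries(request_fn, max_attempts=5, base_delay=0.5):
    """Retry a request on server-side (5xx) failures with exponential backoff."""
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status < 500:          # success or client error: don't retry
            return status, body
        # Back off exponentially, with jitter to avoid synchronized retries
        delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
    return status, body           # give up after max_attempts

# Example: a fake endpoint that fails twice with 503, then succeeds
responses = iter([(503, ""), (503, ""), (200, "ok")])
status, body = call_with_retries(lambda: next(responses), base_delay=0.01)
```

The jitter term matters during an outage window like Oracle's: without it, thousands of clients retry in lockstep and prolong the overload.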


EU parliamentary committee says 'no' to EU-US data privacy framework

In particular, the committee noted, the executive order is too vague, leaves US courts — which would be the sole interpreters of the policy — wiggle room to approve the bulk collection of data for signals intelligence, and doesn’t apply to data accessed under US laws like the Cloud Act and the Patriot Act. The parliamentary committee's major points echoed those of many critics of the deal in the EU, as well as the criticism of the American Civil Liberties Union (ACLU), which has said that the US has failed to enact meaningful surveillance reform. ... In short, the committee said that US domestic law is simply incompatible with the GDPR framework, and that no agreement should be reached until those laws are better aligned. The committee’s negative response this week was, however, a nonbinding draft resolution; though it is a sticking point, it does not formally halt the adoption process, as the committee's approval was not required to move the agreement along.


How edge devices and infrastructure will shape the metaverse experience

Cloud-native edge infrastructure can address these shortcomings and provide optimized service chaining. It can handle a tremendous amount of data processing while delivering cost-effective, terabit-scale performance and reduced power consumption. In doing so, edge computing can move past closed networking models to meet the demanding data processing requirements of the metaverse. “Edge computing allows data to be processed at or near the data source, implying that commands and processes will occur promptly. As the metaverse will require massive data simultaneously, processing data quickly and seamlessly depends on proximity,” Prasad Joshi, SVP and head of emerging technology solutions at Infosys, told VentureBeat. “Edge computing offers the ability to process such information on a headset or on the device, thereby making that immersive experience much more effective.” ... The power, space and cooling limitations of legacy architecture further exacerbate this data surge. While these challenges impact consumer-based metaverse applications, the stakes are much higher for enterprise use cases.


The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter

It’s not a Skynet-level supercomputer that can manipulate the real world. ... Those feats are impressive. But combined with what appears to be an unstable personality, a capacity to threaten individuals, and an ability to brush off the safety features Microsoft has attempted to constrain it with, that power could also be incredibly dangerous. Von Hagen says he hopes that his experience being threatened by Bing makes the world wake up to the risk of artificial intelligence systems that are powerful but not benevolent—and forces more attention on the urgent task of “aligning” AI to human values. “I’m scared in the long term,” he says. “I think when we get to the stage where AI could potentially harm me, I think not only I have a problem, but humanity has a problem.” Ever since OpenAI’s chatbot ChatGPT displayed the power of recent AI innovations to the general public late last year, Big Tech companies have been rushing to market with AI technologies that, until recently, they had kept behind closed doors as they worked to make them safer.


Machines Are Dreaming Instead of Learning

The question is: how much of the ‘data problem’ is about the quantity versus the quality of data? To deal with this scarcity, people are moving away from accessing and using real data towards using synthetic data. In a nutshell, synthetic data is artificially generated data, produced mathematically or statistically, that closely resembles real-world data. It also increases the amount of training data available, which in turn can improve model accuracy and mitigate flaws in the original data. There are many reasons to be attracted to synthetic data, such as data privacy. ... One of the reasons that synthetic data is on the rise is to tackle the bias present in smaller datasets. There is a caveat, though: while larger datasets can contain poor-quality data—which would require more fine-tuning and heavier workloads—synthetic data often fails to capture the quality and the amount of variability present in real-world data. Synthetic data is generated using algorithms that model the statistical properties of real data.
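The last sentence can be made concrete: a common baseline is to fit a simple statistical model to the real data and sample fresh rows from it. A minimal sketch using a multivariate Gaussian (real synthetic-data pipelines must also handle categorical fields, non-Gaussian distributions, and formal privacy guarantees):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "real" data: 200 samples, 3 correlated numeric features
real = rng.multivariate_normal(
    mean=[10.0, 5.0, 0.0],
    cov=[[4.0, 1.5, 0.0], [1.5, 2.0, 0.5], [0.0, 0.5, 1.0]],
    size=200,
)

# Fit the statistical properties of the real data...
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# ...then sample as many synthetic rows as needed
synthetic = rng.multivariate_normal(mu, sigma, size=1000)

# The synthetic set mimics the real distribution without copying any row
print(np.allclose(mu, synthetic.mean(axis=0), atol=0.5))  # True
```

Note the caveat from the text shows up even here: the Gaussian fit captures means and covariances but none of the rarer structure in the real data, which is exactly the variability gap the excerpt warns about.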


Making Microservices Just the Right Size

By attempting to make smaller and simpler services, applications have become more complex. The smaller service size is a great benefit to the individual development team that owns that service, but the complex interconnection between services has made the overall system architecture more involved. We’ve essentially moved the complexity uphill: rather than individual developers dealing with complexity at the code level, system architects deal with it at the system level. Thus, services that are too large are difficult to build and understand at scale, while services that are too small simply move the complexity up to the system level. The goal, therefore, is to find the right size. It’s like the story of Goldilocks and the Three Bears: it’s easy to build services too big or too small, and finding the size that’s just right often involves trial and error. How do you find the Goldilocks size for your microservices? The answer depends a lot on your organization and your application.


4 Ways To Be A Learning Leader

Constant curiosity makes learning simply part of you and your way of being. If you're motivated and hungry to improve your skills and knowledge, you'll learn more successfully. Professor and researcher Francesca Gino wrote, “When our curiosity is triggered, we think more deeply and rationally about decisions and come up with more-creative solutions.” Additionally, developing and demonstrating a genuine interest in people and their perspectives and interests enriches all your relationships. Start by asking yourself what you're curious about, then think about all the topics that extend from that. If this still feels hard, set an intention to ask one other-oriented question per meeting or interaction. We all consume and digest information and learning differently. Think about how you prefer to learn in given contexts. For example, do you like to just go for it? Do you like talking to other leaders, coaches or mentors? Maybe you like podcasts or reading books and articles. Discover what works best for your learning.


Malware authors leverage more attack techniques that enable lateral movement

"An increase in the prevalence of techniques being performed to conduct lateral movement highlights the importance of enhancing threat prevention and detection both at the security perimeter as well as inside networks," researchers from cybersecurity firm Picus, said in their report. Many years ago lateral movement used to be associated primarily with advanced persistent threats (APTs). These sophisticated groups of attackers are often associated with intelligence agencies and governments, whose primary goals are cyberespionage or sabotage. To achieve these goals these groups typically take a long time to understand the network environments they infiltrate, establish deep persistence by installing implants on multiple systems, they identify critical servers and sensitive data stores and try to extract credentials that gives them extensive access and privilege escalation. APTs also used to operate in a targeted manner, going to specific companies from specific industries that might have the secrets their handlers are looking for.


The cost and sustainability of generative AI

More demand for AI means more demand for the resources these AI systems use, such as public clouds and the services they provide. This demand will most likely be met with more data centers housing power-hungry servers and networking equipment. Public cloud providers are like any other utility resource provider and will increase prices as demand rises, much like we see household power bills go up seasonally (also based on demand). As a result, we normally curtail usage, running the air conditioning at 74 degrees rather than 68 in the summer. However, higher cloud computing costs may not have the same effect on enterprises. Businesses may find that these AI systems are not optional and are needed to drive certain critical business processes. In many cases, they may try to save money within the business, perhaps by reducing the number of employees in order to offset the cost of AI systems. It’s no secret that generative AI systems will displace many information workers soon.


6 quantum computing questions IT needs to ask

The challenge is that older systems' data formats and fields may not be compatible with newer systems. In addition, the fields and tables might not contain what you'd expect. There is also the complexity of free-text fields that store keywords. Do not underestimate the challenge of making existing data available for a quantum application to work with. ... The important question in developing quantum applications is finding tools that can provide a 10-year lifespan with guaranteed software support. There are many open source tools for quantum-based application development. A company could take on one (or more) open source projects, but this can be a challenging and costly commitment. The issue is not only keeping your software up to date (and retaining staff to develop it) but also developing quantum software that's compatible with the rest of your IT environment. When weighing lifespan, also factor in the risk of abandoned open source projects for quantum software applications.



Quote for the day:

"Leadership is an opportunity to serve. It is not a trumpet call to self-importance." -- J. Donald Walters

Daily Tech Digest - February 17, 2023

Bard, Bing, and the 90% problem

With search in particular, accuracy and thoroughness matter. One simple answer is fine — when it’s right. And when you can trust that it’s right. But it certainly seems like right now, that’s anything but the case with any of this technology. Hell, Microsoft's Bing-bot includes prominent disclaimers that it’s likely to provide inaccurate or incomplete information! And all novelty and cool factor aside, I just don’t see how that’ll make for an especially useful utility from a search context, for as long as that remains the case. ... It's really quite simple: If even one out of every 10 attempts at using something produces a flawed or for any reason unsatisfactory result, folks tend to lose faith in said thing pretty fast. And they then end up turning to another tool for the same purpose more often than not. That's why lots of us rely on Assistant for functional commands, which work fairly consistently — but when it comes to more complex searches, whether we've got Assistant at our beck and call on a phone or built into the core system interface on a Chromebook, we're still more likely to go to Google to get an answer.
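The "90% problem" compounds quickly: if each query independently succeeds 90% of the time, the chance that a short session contains no flawed answer falls off fast. A quick back-of-the-envelope check (the 90% figure and independence assumption are illustrative, taken from the article's framing):

```python
# Probability that every query in a session is satisfactory,
# assuming each query independently succeeds with probability p
p = 0.9

for n in (1, 5, 10, 20):
    all_good = p ** n
    print(f"{n:2d} queries: {all_good:.1%} chance of zero bad answers")

# 0.9**10 ~= 0.349: even at "90% accuracy", a ten-query session has
# roughly a 65% chance of including at least one bad answer.
```

Which is the crux of the argument: per-query accuracy that sounds high still produces per-session failure rates that erode trust.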


EaaS as a Technique to Raise Productivity in Teams

EaaS can help you provide your application in a staging environment. Essentially, this environment is a copy of your production environment. EaaS tools assist you with duplicating the production environment and all of its elements (e.g., the code, settings, and deployment configurations). These technologies enable you to quickly create these environments for your clients, providing them with a trial version of your software. Consequently, you can present your products to clients more quickly, even before the application is finished. EaaS also allows developers to be more creative by constructing sandbox-like environments in which they can experiment with new ideas without having to build new configurations or recreate existing ones. The EaaS approach is scalable and cost-effective: you pay only for the resources you use and the time your server is online. So, if you need to submit a proof of concept to a stakeholder, you pay only for the time the environment is operational.


Fraudsters are using machine learning to help write scam emails in different languages

Scammers don't even need to speak the language of the people or organizations they're targeting: analysis of some prolific BEC campaigns by researchers at Abnormal Security suggests that email fraudsters are turning to machine learning-powered translation tools like Google Translate to help compose emails used in the attacks. This technique is enabling widespread BEC campaigns for an expanded array of cyber-criminal groups, who can cast a larger net at minimal cost. "Attacking targets across various regions and using multiple languages is nothing new. However, in the past, these attacks were perpetrated mainly by sophisticated organizations with bigger budgets and more advanced resources," said Crane Hassold, director of threat intelligence at Abnormal Security. ... The payment fraud campaigns have been distributed in at least 13 different languages, including Danish, Dutch, Estonian, French, German, Hungarian, Italian, Norwegian, Polish, Portuguese, Spanish, and Swedish.


Don’t Let a Cyberattack Destroy Your Pharmacy

One mistake that many independent pharmacies make is to use free Gmail addresses to transmit sensitive data, Mr. Gallagher added. The email service is not encrypted or secure, he stressed, which is why a better option is to use a private domain for company email. Similarly, he added, it’s important to choose HIPAA-compliant videoconferencing software, such as Microsoft Teams, for discussions with patients and internal meetings. Sloppy data disposal practices are another concern. “What we’ve learned from previous breaches that have happened at pharmacies is that whether it’s paper or whether it’s electronic, it’s really a good idea to ensure that the information is responsibly and securely disposed of,” said Lee Kim, JD, the senior principal of cybersecurity and privacy at the Healthcare Information and Management Systems Society, who wasn’t a presenter at NASP. “How many of us actually think, ‘Well, maybe I should ensure that everything is wiped from the photocopier before it gets serviced’? Probably not many, but if you don’t think about the small transactional things like that … people’s information is at risk.”


States sketch out roadmaps for zero trust ‘journey’

“Money doesn't solve every problem, and endless amounts of money would not instantly create a perfect world where every state has zero trust fully implemented in a very mature way,” Pugh said. “But it would help those states that are very budget strapped and have many competing priorities.” One way of assessing how far along states are in implementing zero trust is whether it is “top of mind in security conversations,” said Jim Richberg, public sector field CISO and vice president of information security at Fortinet. And by that measure, state leaders are paying attention. Those that have led the way on state-level zero trust said guidance already exists from the likes of the National Institute of Standards and Technology’s Authenticator Assurance Levels and Identity Assurance Levels. With those guidelines in place, said Adam Ford, Illinois’ chief information security officer during a National Governors’ Association webinar, states can establish a baseline for themselves, even though the system nationwide is set up so we are "50 experiments going on at the same time," he said.


Don't put off data minimization

From a risk-based perspective, the biggest exposure is in relation to cyberattack. This is a particular threat for law firms because cybercriminals now include you on a shortlist of prime targets. The ABA’s cybersecurity report in 2021 observed that ransomware, in particular, is “an increasing threat to lawyers and law firms of all sizes”. Microsoft revealed that state-sponsored Chinese hackers have been targeting “US-based universities, defense contractors, law firms and infectious disease researchers”. A lack of systematic data minimisation increases your attractiveness to such criminals because you present a larger, juicier target. Moreover, cyberattack can be your biggest nightmare. It incurs lost productivity and may entail ransom demands. You’ll likely need to pay cybercrime expert fees, and potentially regulatory and professional fines. But that’s not all. A New York based entertainment law firm suffered an attack in 2020 when hackers demanded a ransom payment of US$42 million to prevent the release of confidential information about the firm’s world-famous clients. News outlets subsequently reported that the firm eventually paid out US$365,000. And there’s the rub.


CIO role: 4 ways to do more with less

Even the best CIOs can fall victim to a common efficiency-robbing habit: getting lost in the weeds on a particular project. As CIO, you have a lot on your plate, and it’s easy to miss deadlines or deliver sub-par performance if you get too focused on details your team can – and should – handle. Assuming you have a competent, trustworthy team, let go of more minor details and remain laser-focused on your organization’s desired strategic outcomes. When CIOs feel compelled to control every detail, it can indicate a struggling organization. If a business’ IT arm is bogged down by legacy systems or an outpouring of manual and rote tasks that do nothing for business performance, the CIO will often be mired in dealing with organizational performance issues. That means more time managing internal fire drills and less time thinking strategically and making business-critical decisions. ... When you have the confidence and infrastructure to delegate details to your team, you’ll have much more bandwidth to focus on the big picture and drive your business forward.


Navigating the ever-changing landscape of digital security solutions

We see an increasingly fragmented geopolitical landscape with unique data residency requirements for each country, which is resulting in localized hosting of solutions as well as nimbleness and increased granularity of data control. Regulations like GDPR and CCPA necessitate not only safeguarding information (via encryption and tokenization) but also driving automated protection of PII. Recent regulations from the White House and guidance from CISA are aimed at driving better compliance with incident disclosure as well as offering a blueprint for zero trust. ... Most progressive organizations view cybersecurity as business critical and partner with organizations like ours to create a comprehensive cybersecurity strategy. In short, while there is increased oversight, both the consumers and providers of security solutions are more focused on implementing a zero-trust approach, instituting automated protection of information, and taking a partnership posture as opposed to a traditional vendor-buyer approach.


Cybersecurity Jobs Remain Secure Despite Recession Fears

"With reports of job cuts at organizations including Twitter, Meta, Microsoft, Amazon and Google, cybersecurity staff could benefit from proactive hiring targeted towards those recent layoffs," the report stated. "With so many tech jobs impacted by recent layoffs, it is possible that many of those individuals may find opportunity in pursuing a career in cybersecurity, where they can apply related skills and expertise." The resilience in demand for cybersecurity professionals comes as many workers burned out and resigned, part of the Great Resignation in 2022. Organizations that lost valuable specialists did so for three main reasons, Rosso says. Cybersecurity teams have traditionally not had great career advancement opportunities, so their ability to gain promotions and increased salaries at their current company are often limited. In addition, the culture surrounding many security teams has often led to burnout and mental stress, she says. "We know, for example, that at the end of 2021 and beginning of 2022, the Log4j issue was causing people to clock a lot of hours, and that led to some burnout," she says. 


Why Your Organization Needs to Embrace Data Resiliency

Enterprises should take a holistic approach to understanding their data: how it's gathered, how it's used throughout the organization, and how it's impacted by a lack of availability or corruption, Krishnamoorthy says. “This starts with creating a detailed map of business processes, applications, systems, and data,” he suggests. Schick notes that there's no industry-standard checklist for ensuring data resiliency, but advises separating critical and non-critical data, storing data in separate locations, logging transactions that change critical data, and using tools and processes to quickly recover corrupted or lost data. Enterprises should retain data only for as long as it's needed, O'Hern suggests. “We eliminate risk when we purge … which means it no longer exists to be held hostage.” Krishnamoorthy notes that it's also important to understand how applications, automated tools and systems, and IT staff interact with enterprise data from manageability, serviceability, and security perspectives. 



Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent

Daily Tech Digest - February 16, 2023

Eyes on Data: The Elevated Role of Data Management and the CDO

As data becomes more prevalent for every single employee of every organization, it is imperative that organizations go beyond data governance to develop a strong data-driven culture. The importance of a data-driven culture was identified as a key factor in overall success. Data culture starts at the top. Senior executives must establish a data mindset across the firm, emphasizing the importance of a sound data management discipline. Getting the most out of an organization’s data means investing in the programs that support it and the people who are tasked with using it to ensure strong data awareness and literacy. Without a focus on data literacy, organizations are at risk of coming up short in achieving their objectives. ... Today’s data management professionals are assuming more and more responsibility for the public’s data. It is critical, therefore, that firms take responsibility for the ethical access and use of this data and do everything they can to avoid unintentional outcomes due to poor data quality, lack of data analytic model governance, or hidden data biases. 


What are the biggest challenges organizations face in their digital transformation efforts?

The leadership should give a big safety net to everyone by saying ‘Hey, we are going on this journey, we are going to learn a lot and, if you fail, if you have issues, that’s okay. We’ll cover you, we are on your side, let’s just go through this learning journey first.’ So give that safety net for everyone. At the same time, provide some kind of framework for learning. You can’t just say to a whole organization we are going to be transforming ourselves and we are going to be DevOps-enabled and just leave it at that. You should have a program, and some kind of learning mechanism, and probably some outside training if that is needed. You should have days set aside, maybe even give employees 80% of the time to do normal work but 20% to learn something new. This framework of learning and enabling is really important for people to upskill themselves. Think in a different way and basically be happy about the journey that they are on because once people are motivated and happy, then a bunch of stuff starts happening.


Operational Resilience: More than Disaster Recovery

The broader focus of operational resilience requires organisation-wide participation. You cannot simply leave it to a single department or team. Instead, everyone needs to be involved, from executives and the board of directors to individual employees in multiple departments. In today’s climate, it’s not just your own organisation that’s under threat. Your suppliers, partners, and vendors are targets, too. If a major supplier is compromised or taken down, your business might go down with them. Leadership needs to understand risk and to know the risk tolerance and risk appetite of the company. That even includes things such as procurement functions and agreements with third-party suppliers. Resilience must be built into everything down to everyday workflows, and if a single supplier is insufficient to manage risk, then diversity of supply is a must. There are many cases where a cyber event at a supplier rendered multiple organisations unable to fulfil their business outcomes. ... In the context of operational resilience, every risk management scenario and process must consider the supply chain.


Helping users and organizations build an instinctive data privacy habit

Although there’s no one-size-fits-all measurement metric for engagement, there are a few guidelines we typically use. The first is tracking the level of engagement and participation from our stakeholders, including consumers, businesses, and other non-profit organizations we partner with. We look at the number of events and activities hosted (e.g., webinars, panel discussions, etc.) and how well attended those are, as well as media and social media mentions as a bellwether to gauge the reach and impact of the campaign. Additionally, we also measure success by the extent to which we’ve achieved the objectives and goals we set at the beginning of each campaign. It’s an abstract metric, but for Data Privacy Week, our mission is to raise awareness and educate individuals and organizations about the importance of protecting personal information and championing transparency by companies relative to how they collect, store, and use consumer data.


6 things ChatGPT can't do (and another 20 it refuses to do)

It won't write about anything after 2021 - As an AI language model, I was trained on a large dataset of text that only goes up to 2021. This means that I don't have access to information or events that occurred after that date. While I have access to a lot of historical data and information, I don't have access to current events, news, or updates that have occurred since my training data cutoff date. ... It won't predict future results of sports events or political contests - As an AI language model, I can process and analyze large amounts of data, including historical data on sports events and political contests. However, making accurate predictions for these types of events is not always straightforward. While historical data can be used to train machine learning models to identify patterns and correlations, it cannot account for unforeseeable events, changes in circumstances, or new variables that may impact the outcome. Additionally, humans make decisions and can change their behavior based on many factors, making it difficult to predict the outcome of an event with complete certainty.


8 signs you’ve mistimed a major IT initiative

Indecision and a poor risk appetite are two common reasons why CIOs frequently delay an important IT initiative. “Nimble, forward-looking leaders put execution speed at the center of performance discussions; indecisive IT leaders tend to rely on consensus decisions and endless risk evaluations,” says Colm Sparks-Austin, president and managing director of business advisory firm Capgemini Canada. To correctly time a major IT initiative, the decision-maker should align the initiative with business goals. “If the business isn’t spearheading the initiative, or is not aware of it, it’s clear that something is wrong,” Sparks-Austin says. CIOs should also ensure they’re analyzing all IT spend through a business goals lens, Sparks-Austin advises. ... Unrealistic funding almost always plays an important role in initiative timing, observes Ravi Malick, CIO at cloud-based content management, collaboration, and file-sharing tool provider Box; when an initiative fails, he notes, overly optimistic funding is almost always part of the equation.


How to make progress on managing unstructured data

“As the CIO, your job is to be able to provide the information a business needs in order to make decisions,” Minetola said. “The ability to now see into that 80 per cent of the data and make decisions based off that . . . is significant.” ... When thinking about all the data sources an organisation needs to grapple with as part of its transformation, it makes sense. For instance, consider a bank with thousands of computer systems in over a hundred countries. “You need technologies that close silos,” Evelson said. “Whenever we talk about digital transformation, data and analytics platforms that unify everything that I just talked about, like search-powered technologies, are at the top of everyone’s mind.” ... Search-powered technology should bring two critical capabilities to the table: a visualisation layer and machine learning. Visualisation improves the ability to extract insights from large volumes of data. “It’s one thing to be able to have data,” Minetola said. “It’s another thing to understand it.” Furthermore, machine learning such as natural language processing or vector search can help join data sources to create more relevance and context.
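As a sketch of the vector-search idea mentioned above: documents and queries are embedded as vectors, and relevance becomes a nearest-neighbour lookup by cosine similarity. The embeddings and document names below are made up for illustration; real systems use learned embedding models and approximate-nearest-neighbour indexes:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical document embeddings (in practice produced by an ML model)
docs = {
    "q3_revenue_report": np.array([0.9, 0.1, 0.0]),
    "oncall_runbook":    np.array([0.0, 0.2, 0.9]),
    "sales_forecast":    np.array([0.8, 0.3, 0.1]),
}

def search(query_vec, k=2):
    """Return the k documents most similar to the query vector."""
    scored = sorted(
        docs.items(),
        key=lambda kv: cosine_similarity(query_vec, kv[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

# A query embedded near the "finance" direction surfaces finance docs first
print(search(np.array([1.0, 0.2, 0.0])))  # ['q3_revenue_report', 'sales_forecast']
```

This is what lets such systems join sources by meaning rather than by exact keyword match, which is the "relevance and context" the excerpt refers to.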


What Ukraine's IT Industry Can Teach CIOs About Resilience

The agile, remote structure refined during the pandemic has served Ukrainian IT companies well as they operate using a hybrid workforce -- some employees live abroad, some are on the move due to Russian attacks, and others serve in the military. Unlike traditional industries, many IT jobs are service-oriented. “All you need is a computer, Internet, and electricity. You can literally work from anywhere,” Kavetskyi says. Both companies and individuals have engaged in a sustained process of business continuity planning. Now, most organizations have it down to a science. “They have power generators in their offices and Starlinks,” Kavetskyi claims. He emphasizes the power of knowledge sharing: “The IT clusters helped small and medium-sized companies implement basic continuity plans. Everyone working in this industry had a chance to see what others were doing.” “Of course, there was data that couldn't be shared,” he adds. “But in general, big companies were willing to [share their strategies]. Mainly, we had to find time to organize those meetings, considering the logistical challenges.”


Soft skills: How well-rounded IT pros can push your business forward

With organizational spend under greater scrutiny, it’s critical for every new hire you onboard to add value to the business. Productivity and technical skills are paramount in demonstrating resource value. But when you have two candidates with comparable technical skills, you need to consider the value each person’s soft skills bring to the table. ... Soft skills impact how teams communicate, collaborate, and problem-solve, and these capabilities determine the success of your IT projects and client relationships – and, ultimately, your organizational culture. Company culture also plays a crucial role in your brand reputation: You want clients and job candidates to view your team as pragmatic, business-minded problem solvers and communicators. So as non-technical skills continue to play a critical role in the IT arena, it’s time to reconsider the qualities you search for and foster in employees. Skills tests like coding problems and design scenarios make it relatively easy to gauge an applicant’s technical skills. 


Evolving cyberattacks, alert fatigue creating DFIR burnout, regulatory risk

Magnet Forensics’ respondents generally agreed that addressing the burnout and alert fatigue facing DFIR professionals is hampered by recruiting and hiring challenges, onboarding difficulties, and a lack of automation. Half of respondents said increased investment in automation would be “highly” or “extremely” valuable for a range of DFIR functions, including the remote acquisition of target endpoints and the processing of digital evidence. However, while automation such as security orchestration, automation, and response (SOAR) is already in place in many SOCs, those solutions orchestrate and automate cybersecurity runbooks by taking in telemetry, enforcing actions, and using other tools, the report noted. “While important for threat containment and remediation, these runbook-related activities are distinct from those performed by digital forensics automation solutions, which execute a data transformation pipeline by orchestrating, automating, performing, and monitoring forensic workflows,” it added.
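The report's distinction between SOAR runbooks and forensic automation can be made concrete with a minimal sketch of such a data transformation pipeline. The stage names and logic below are illustrative placeholders, not any vendor's actual workflow:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("forensic-pipeline")

def acquire(source):
    """Stand-in for remote acquisition of a target endpoint."""
    return {"source": source, "raw": ["evt1", "evt2", "evt3"]}

def process(evidence):
    """Stand-in for processing digital evidence (parse, normalize)."""
    evidence["parsed"] = [e.upper() for e in evidence["raw"]]
    return evidence

def index(evidence):
    """Stand-in for indexing results for examiner review."""
    evidence["indexed"] = True
    return evidence

def run_pipeline(source, stages):
    """Orchestrate, perform, and monitor each forensic stage in order."""
    artifact = source
    for stage in stages:
        log.info("running stage: %s", stage.__name__)
        artifact = stage(artifact)
    return artifact

result = run_pipeline("workstation-42", [acquire, process, index])
```

The point of the sketch is the shape, not the stages: unlike a SOAR runbook reacting to telemetry, the forensic pipeline pushes one evidence artifact through an ordered, monitored transformation.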



Quote for the day:

"Take time to deliberate; but when the time for action arrives, stop thinking and go in." -- Andrew Jackson

Daily Tech Digest - February 15, 2023

What is generative AI and why is it so popular?

Generative AI refers to AI algorithms that generate or create an output, such as text, photos, video, code, data, or 3D renderings, from the data they are trained on. The premise of generative AI is to create content, as opposed to other forms of AI, which might be used for other purposes, such as analysing data or helping to control a self-driving car. ... Machine learning refers to the subsection of AI that teaches a system to make a prediction based on the data it's trained on. An example of this kind of prediction is when DALL-E is able to create an image based on the prompt you enter by discerning what the prompt actually means. Generative AI is, therefore, a machine-learning framework. ... Generative AI describes any algorithm or model that uses AI to produce a brand-new output. Right now, the most prominent examples are ChatGPT and DALL-E, as well as any of their alternatives. Another example is MusicLM, Google's unreleased AI text-to-music generator. An additional in-development project is Google's Bard.


openIDL: The first insurance Open Governance Network and why the industry needs it

To date, openIDL’s member community includes carrier premier members: Travelers, The Hartford, The Hanover, and Selective Insurance; state regulator and DOI members; infrastructure partners; associate members; and other non-profit organizations, government agencies, and research/academic institutions. openIDL’s network is built on Hyperledger Fabric, an LF distributed ledger software project. Hyperledger Fabric is intended as a foundation for developing applications or solutions with a modular architecture. The technology allows components, such as consensus and membership services, to be plug-and-play. Its modular and versatile design satisfies a broad range of industry use cases and offers a unique approach to consensus that enables performance at scale while preserving privacy. For the last few years, a running technology joke has been “describe your problem, and someone will tell you blockchain is the solution.” As funny as this is, what’s not funny is the truth behind the joke, and the insurance industry is certainly one that fell head over heels for the blockchain hype.


Self-healing endpoints key to consolidating tech stacks, improving cyber-resiliency

Just as enterprises trust silicon-based zero-trust security over quantum computing, the same holds for self-healing embedded in an endpoint’s silicon. Forrester analyzed just how valuable self-healing in silicon is in its report, The Future of Endpoint Management. Forrester’s Andrew Hewitt, the report’s author, says that “self-healing will need to occur at multiple levels: 1) application; 2) operating system; and 3) firmware. Of these, self-healing embedded in the firmware will prove the most essential because it will ensure that all the software running on an endpoint, even agents that conduct self-healing at an OS level, can effectively run without disruption.” Forrester interviewed enterprises with standardized self-healing endpoints that rely on firmware-embedded logic to reconfigure themselves autonomously. Its study found that Absolute’s reliance on firmware-embedded persistence delivers a secured, undeletable digital tether to every PC-based endpoint. Organizations told Forrester that Absolute’s Resilience platform is noteworthy in providing real-time visibility and control of any device, on a network or not, along with detailed asset management data.


How enterprises can use ChatGPT and GPT-3

It is not possible to customize ChatGPT, since the language model on which it is based cannot be accessed. Though its creator company is called OpenAI, ChatGPT is not an open-source software application. However, OpenAI has made the GPT-3 model, as well as other large language models (LLMs) available. LLMs are machine learning applications that can perform a number of natural language processing tasks. “Because the underlying data is specific to the objectives, there is significantly more control over the process, possibly creating better results,” Gartner said. "Although this approach requires significant skills, data curation and funding, the emergence of a market for third-party, fit-for-purpose specialized models may make this option increasingly attractive." ... ChatGPT is based on a smaller text model, with a capacity of around 117 million parameters. GPT-3, which was trained on a massive 45TB of text data, is significantly larger, with a capacity of 175 billion parameters, Muhammad noted. ChatGPT is also not connected to the internet, and it can occasionally produce incorrect answers.


Flaws in industrial wireless IoT solutions can give attackers deep access into OT networks

While many of these flaws are still in the process of responsible disclosure, one that has already been patched impacts Sierra Wireless AirLink routers and is tracked as CVE-2022-46649. This is a command injection vulnerability in the IP logging feature of ACEManager, the web-based management interface of the router, and is a variation of another flaw found by researchers from Talos in 2018 and tracked as CVE-2018-4061. It turns out that the filtering put in place by Sierra to address CVE-2018-4061 did not cover all exploit scenarios, and researchers from Otorio were able to bypass it. In CVE-2018-4061, attackers could attach additional shell commands to the tcpdump command executed by the ACEManager iplogging.cgi script by using the -z flag. This flag is supported by the command-line tcpdump utility and is used to pass so-called postrotate commands. Sierra fixed it by enforcing a filter that removes any -z flag from the command passed to the iplogging script if it's followed by a space, tab, form feed or vertical tab, which would block, for example, "tcpdump -z reboot".
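To see why a whitespace-keyed blocklist of this kind can fall short, consider a toy filter in the same spirit. This is a simplified illustration, not Sierra's actual filter code, and not necessarily the exact bypass Otorio used; it relies on the fact that getopt-style tools such as tcpdump also accept an option's argument attached directly to the flag, so a pattern that requires whitespace after -z never fires on the attached form:

```python
import re

def strip_postrotate_flag(cmd: str) -> str:
    # Remove a "-z" flag only when whitespace separates it from its
    # argument -- mimicking the kind of blocklist described above.
    return re.sub(r"-z[ \t\f\v]+\S+", "", cmd)

filtered = strip_postrotate_flag("tcpdump -z reboot")   # "-z reboot" is stripped
bypassed = strip_postrotate_flag("tcpdump -zreboot")    # attached form survives intact
```

The general lesson is the one the researchers drew: filters built from examples of known-bad input tend to miss equivalent encodings of the same payload.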


Are Your Development Practices Introducing API Security Risks?

APIs are a prime target for such attacks because cybercriminals can overload the API endpoint with unwanted traffic. Ultimately, the attacker’s goal is to use the API as a blueprint to find internal objects or database structures to exploit. For example, a vulnerable API endpoint backend that connects to a frontend service can expose end users to risk. One researcher even discovered a way to abuse automobiles’ APIs and telematics systems to execute various tasks remotely, such as to lock the vehicle. In the past, bot management technologies, like CAPTCHA, were developed to block bots’ access to web pages that were intended only for human users. However, that approach to security assumes that all automated traffic is malicious. As application environments have matured and multiplied, automation became essential for executing simple functions. This means organizations cannot rely on simplistic web application firewall rules that block all traffic from automated sources by default. Instead, they need to quickly identify and differentiate good and bad bot traffic.
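One common way to throttle abusive clients without blocking all automation outright is a per-client token bucket. The sketch below is a generic illustration, not any specific vendor's bot-management logic; the injected clock exists only to make the demonstration deterministic:

```python
import time

class TokenBucket:
    """Per-client throttle: well-behaved automation gets through at a
    sustained rate instead of being blocked wholesale."""
    def __init__(self, rate_per_sec: float, capacity: float, now=time.monotonic):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.now = now
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

clock = [0.0]  # fake clock for a reproducible demo
bucket = TokenBucket(rate_per_sec=5, capacity=10, now=lambda: clock[0])
burst = [bucket.allow() for _ in range(12)]  # 12 instantaneous requests
clock[0] += 1.0                              # one second passes
later = bucket.allow()                       # bucket has refilled
```

In the burst, the first ten requests pass and the last two are throttled; after a second of refill, the same client is served again, which is the differentiated treatment the passage calls for.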


Zero-shot learning and the foundations of generative AI

One application of few-shot learning techniques is in healthcare, where medical images with their diagnoses can be used to develop a classification model. “Different hospitals may diagnose conditions differently,” says Talby. “With one- or few-shot learning, algorithms can be prompted by the clinician, using no code, to achieve a certain outcome.” But don’t expect fully automated radiological diagnoses too soon. Talby says, “While the ability to automatically extract information is highly valuable, one-, few-, or even zero-shot learning will not replace medical professionals anytime soon.” Pandurang Kamat, CTO at Persistent, shares several other potential applications. “Zero-shot and few-shot learning techniques unlock opportunities in areas such as drug discovery, molecule discovery, zero-day exploits, case deflection for customer-support teams, and others where labeled training data may be hard to come by.” Kamat also warns of current limitations.


PWC highlights 11 ChatGPT and generative AI security trends to watch in 2023

“Many of the interesting business use cases emerge when you consider that you can further train (fine-tune) generative AI models with your own content, documentation and assets so it can operate on the unique capabilities of your business, in your context. In this way, a business can extend generative AI in the ways they work with their unique IP and knowledge. “This is where security and privacy become important. For a business, the ways you prompt generative AI to generate content should be private for your business. Fortunately, most generative AI platforms have considered this from the start and are designed to enable the security and privacy of prompts, outputs and fine-tuning content. ... “Using generative AI to innovate the audit has amazing possibilities! Sophisticated generative AI has the ability to create responses that take into account certain situations while being written in simple, easy-to-understand language.


What leaders get wrong about responsibility

One way of demonstrating responsibility is through the process of asking and answering questions. Many get at least one part of the process right: by responding to the questions received from their employees, leaders believe that they are showing themselves to be reliable and trustworthy. This isn’t too far off base. The word responsibility, after all, stems from the Latin respondere, meaning to respond or answer to. Unfortunately, by not asking questions themselves, leaders prevent employees from demonstrating the same kind of reliable and trustworthy behavior—and that makes it harder to embed the locally owned responsibility that they are looking for. ... When leaders use questions to assume responsibility themselves, they think, talk, and behave in a way that puts them at the center of attention (see the left side of the figure above). The questions they ask are quiz or test questions designed to confirm that the respondents see the world in the same way the leader does—e.g., “What are the components of a good marketing campaign?”


OT Network Security Myths Busted in a Pair of Hacks

In one set of findings, a research team from Forescout Technologies was able to bypass safety and functional guardrails in an OT network and move laterally across different network segments at the lowest levels of the network: the controller level (aka Purdue level 1), where PLCs live and run the physical operations of an industrial plant. The researchers used two newly disclosed Schneider Modicon M340 PLC vulnerabilities that they found — a remote code execution (RCE) flaw and an authentication bypass vulnerability — to breach the PLC and take the attack to the next level by pivoting from the PLC to its connected devices in order to manipulate them to perform nefarious physical operations. "We are trying to dispel the notion that you hear among asset owners and other parties that Level 1 devices and Level 1 networks are somehow different from regular Ethernet networks and Windows [machines] and that you cannot move through them in very similar ways," says Jos Wetzels.



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley

Daily Tech Digest - February 14, 2023

Will your incident response team fight or freeze when a cyberattack hits?

CISOs shouldn’t be surprised to hear that even well-prepared teams can have moments of paralysis; it’s just human nature, McKeown says. Sometimes, she notes, responders may experience cognitive narrowing, where they’re so focused on the situation directly in front of them that they can’t consider the full circumstances—an experience that can stop responders from thinking as they normally would. Niel Harper, an enterprise cybersecurity leader who serves as a board director with the governance association ISACA, witnessed a team freeze in response to a ransomware attack on his first day working with a company as an advisor. “They literally did not know what to do, even though they had some experience with [incident response] walkthroughs,” he recalls. “They were in panic mode.” Harper says he has seen other situations where the response was stymied and thus delayed. In some cases, teams were afraid that they’d be seen as overreacting. In others, they were paralyzed with the fear of being blamed.


What Kind of Glasses Are You Wearing? Your View of Risk May Be Your Biggest Risk of All

Consider an organization that is focusing on increasing revenue by expanding outbound sales in new territories in the European market. A compliance-focused organization might conduct an internal assessment of EU General Data Protection Regulation (GDPR) requirements, determine if there are current controls in place to meet them and report metrics indicating the organization is compliant. However, a risk-focused enterprise begins by assessing the unique threats within the region and determining the risk factors that could prevent the organization from conducting sales in Europe. Wearing risk-colored glasses empowers risk professionals to proactively monitor and communicate risk in a context their organization will understand. Viewing business outcomes from this perspective enables organizational leadership to prioritize investments and agree on a suitable level of protection.


Australian organisations underinvesting in cyber security

The underinvestment was more stark among small companies, of which 69% had not invested enough in cyber security, according to the study conducted by Netskope, a supplier of secure access service edge (SASE) services. Major data breaches over the past year, however, have cast the spotlight on cyber security, with over three-quarters (77%) of 300 respondents who participated in the study noting that their leadership’s awareness of cyber threats had increased. Some 70% also noted an increase in their leadership’s willingness to bolster investments – the proportion of organisations that are planning bigger cyber security budgets between 2022 and 2023 jumped to 63%, compared with 45% that saw increases between 2020 and 2022. This increase is most pronounced among larger organisations with over 200 employees, where over 80% are increasing cyber security budgets. Among small firms with fewer than 20 employees, 41% planned to spend more on cyber security between 2022 and 2023, up from just 23% between 2020 and 2022.


What Is Zero-Knowledge Encryption?

Zero-knowledge encryption is not a specific encryption protocol, but a process that focuses on preserving a user’s data privacy and security to the maximum extent. In order for a service to be truly zero-knowledge, a user’s data must be encrypted before it leaves the device, while it’s being transferred, and when it is stored on an external server. This is because modern encryption in general is incredibly effective at barring unauthorized participants from decoding encrypted data. It’s functionally impossible to crack modern-day encryption using brute-force approaches. However, for ease of use and UX benefits, many service providers also hold a user’s encryption key—introducing an additional point of failure that’s attractive for malicious actors because service providers often hold many user keys. There are a variety of benefits (and also drawbacks) when a service provider holds a user’s encryption key, but it also means that someone other than the user can decrypt the data—which makes the service not zero-knowledge.
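The principle can be sketched in a few lines: the key is derived on the user's device and only ciphertext ever reaches the server, so the provider cannot read the data it stores. The HMAC-based stream cipher below is a toy for illustration only; a real zero-knowledge service would use a vetted authenticated cipher such as AES-GCM:

```python
import hashlib
import hmac
import os

def derive_key(password: bytes, salt: bytes) -> bytes:
    # Derived on the user's device; never sent to the server.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy HMAC counter-mode keystream, XORed with the data.
    # Symmetric: applying it twice with the same key/nonce decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

salt, nonce = os.urandom(16), os.urandom(16)
key = derive_key(b"correct horse battery staple", salt)

ciphertext = keystream_xor(key, nonce, b"my private note")  # encrypted on-device
plaintext = keystream_xor(key, nonce, ciphertext)           # only the key holder can reverse it
```

The server in this model stores only `salt`, `nonce`, and `ciphertext`; without the password it holds nothing it can decrypt, which is exactly the property the passage describes.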


Google Touts Web-Based Machine Learning with TensorFlow.js

Why Do ML over the Web? First off, he mentioned privacy. One common use case is for processing sensor data in ML workloads — such as data from a webcam or microphone. Using TensorFlow.js, Mayes said, “none of that data goes to the cloud […] it all happens on-device, in-browser, in JavaScript.” For this reason, TensorFlow.js is being used by companies doing remote healthcare, he said. Another privacy use case is human-computer interaction. “With some of our models, we can do body pose estimation, or body segmentation, face keypoint estimation, all that kind of stuff,” Mayes said. Lower latency is another reason to do ML in the browser, according to Mayes. “Some of these models can run over 120 frames per second in the browser, on an NVIDIA 1070 let’s say,” he said. “So that’s kind of [an] old generation graphics card and [yet it’s] still pushing some decent performance there.” Cost was his third reason, “because you’re not having to hire and run expensive GPUs and CPUs in the cloud and keep them running 24/7 to provide a service.”


Solidifying Risk Management: How to Get Started With Continuous Monitoring

Continuous monitoring entails understanding not only the risks you’re facing now and those visible on the horizon, but also the risks beyond the horizon. This requires recognizing risk velocity, acknowledging risk volatility, and developing and deploying a mechanism by which you can periodically check in on, and be alerted to, key risks. The key is to think differently, and to use your 360° view of your organization to develop strategies that help you simultaneously plan and execute in coordination and ongoing communication with first- and second-line roles. ... KRIs are crucial for continuous monitoring, helping companies be more proactive in identifying potential impacts. KRIs are selected and designed by analyzing risk-related events that may affect the organization’s ability to achieve its objectives. Typically, by looking at risk events that have impacted the organization (in the past or currently), it’s possible to work backward to pinpoint the root-cause or intermediate events that led to them.


Using the blockchain to prevent data breaches

A primary reason for the increase in data breaches is over-reliance on centralized servers. Once consumers and app users enter their personal data, it’s directly written into the company’s database, and the user doesn’t get much say in what happens to it afterward. Even if users attempt to limit the data the company can share with third parties, there will be loopholes to exploit. As the Facebook–Cambridge Analytica data-mining scandal showed, the results of such centralization can be catastrophic. Additionally, even assuming goodwill, the company’s servers could still get hacked by cybercriminals. In contrast, blockchains are decentralized, immutable records of data. This decentralization eliminates the need for one trusted, centralized authority to verify data integrity. Instead, it allows users to share data in a trustless environment. Each member has access to their own data, a system known as zero-knowledge storage. This also makes the network less likely to fall victim to hackers. Unless they bring down the whole network simultaneously, the undamaged nodes will quickly detect the intrusion.
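The tamper-evidence the passage relies on comes from hash chaining: each block commits to the hash of its predecessor, so rewriting any record breaks every later link. A minimal single-node sketch (real blockchains add consensus and digital signatures on top of this):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash({"data": data, "prev": prev})
    chain.append(block)

def verify(chain: list) -> bool:
    # Any rewritten record invalidates its own hash and the link
    # stored in every subsequent block.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev": block["prev"]}):
            return False
    return True

chain = []
for record in ["alice:+10", "bob:-3", "carol:+7"]:
    append_block(chain, record)

ok_before = verify(chain)       # intact ledger verifies
chain[1]["data"] = "bob:-3000"  # an attacker rewrites one record
ok_after = verify(chain)        # verification now fails
```

This is the mechanism behind "undamaged nodes will quickly detect the intrusion": any honest node re-running `verify` against its own copy spots the divergence immediately.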


Companies serious about customer privacy in 2023 will start with data security

This should be viewed as an opportunity, rather than yet another compliance burden for boards to manage. In fact, cyber executives are increasingly viewing data privacy laws and regulations as an “effective tool for reducing cyber risks … despite the challenges associated with compliance”, according to the World Economic Forum. But to improve privacy protections, those same executives must begin by enhancing security. Why? Because you can have security without privacy, but never privacy without security. Privacy is the right for data subjects to control how their personal information is collected, stored and used. Fail to secure this data and others could access and use it unlawfully. In these terms, data security is an essential prerequisite for protecting customers’ privacy rights. It’s telling that the name for data privacy day in the EU is Data Protection Day. Without adequate “technical and organisational measures” as cited in POPIA (South Africa's Protection of Personal Information Act), true data privacy will always be out of reach.


Healthcare in the Crosshairs of North Korean Cyber Operations

In addition to obfuscating their involvement by operating with other affiliates and foreign third parties, North Korean actors frequently use fake domains, personas, and accounts to execute their campaigns, CISA and the others said. "DPRK cyber actors will also use virtual private networks (VPNs) and virtual private servers (VPSs) or third-country IP addresses to appear to be from innocuous locations instead of from DPRK." The advisory highlighted some of the newer software vulnerabilities that state-backed groups in North Korea have been exploiting in their ransomware attacks. Among them were the Log4Shell vulnerability in the Apache Log4j framework (CVE-2021-44228) and multiple vulnerabilities in SonicWall appliances. CISA's recommended mitigations against the North Korean threat included stronger authentication and access control, implementing the principle of least privilege, employing encryption and data masking to protect data at rest, and securing protected health information during collection, storage, and processing.
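As a small illustration of the data-masking mitigation, a field-level masker might keep only the last four characters of sensitive identifiers before records are stored at rest. The field names and format here are hypothetical, not drawn from the advisory:

```python
def mask_identifier(value: str) -> str:
    """Keep only the last four characters; mask the rest."""
    digits = value.replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def mask_record(record: dict, sensitive=("ssn", "mrn")) -> dict:
    """Return a copy of a record safe to store at rest.
    Field names are illustrative placeholders."""
    masked = dict(record)
    for field in sensitive:
        if field in masked:
            masked[field] = mask_identifier(masked[field])
    return masked

patient = {"name": "J. Doe", "ssn": "123-45-6789", "mrn": "000777123"}
safe = mask_record(patient)
# safe["ssn"] -> "*****6789", safe["mrn"] -> "*****7123"
```

Masking like this limits what an intruder can exfiltrate from stored records, complementing (not replacing) encryption at rest.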


Three ideal scenarios for anomaly detection with machine learning

When detecting anomalies, the typical way to go in many business areas was traditionally based on predetermined rules. For example, a fraud detection system could spot suspicious card payments which greatly exceeded a spending threshold. The main problem with this approach is its lack of flexibility, given that the set of rules must be continuously updated to cope with ever-evolving scenarios, such as anomalous activity due to a new type of malware. Here lies machine learning’s full potential. Any system fuelled with this technology can digest enormous datasets, autonomously identify recurring patterns and cause/effect relationships among the data analysed, and create models portraying these connections. In addition, when properly trained, such models will be capable of processing additional data to make predictions, further refining their skills through experience as they consume more and more information.
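The contrast between a hand-tuned rule and a detector learned from data can be shown with the simplest possible statistical test, a z-score cutoff. This is a deliberately minimal sketch; production fraud systems use far richer models. Both approaches flag the obvious outlier below, but the statistical cutoff re-derives itself as spending patterns drift, while the fixed threshold must be re-tuned by hand:

```python
import statistics

payments = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1, 950.0]

# Rule-based: a predetermined threshold that must be maintained manually.
RULE_THRESHOLD = 500.0
rule_flags = [p > RULE_THRESHOLD for p in payments]

# Statistical: flag anything more than two standard deviations
# from the mean of the observed data -- no preset dollar amount.
mean = statistics.mean(payments)
stdev = statistics.stdev(payments)
z_flags = [abs((p - mean) / stdev) > 2.0 for p in payments]
```

If typical spending doubled next year, the rule would need a new threshold, while the z-score detector would simply recompute its baseline from the new data; that adaptability is the flexibility advantage the passage describes.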



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - February 13, 2023

Mergers and Acquisitions in Healthcare: The Security Risks

Incidents such as the CommonSpirit ransomware attack highlight the critical importance for entities to carefully assess and address potential IT security risks involving a potential merger or acquisition, experts say. "We are seeing that well-established health systems or entities that have very mature cybersecurity programs take on an entity which is less secure," says John Riggi, national adviser for cybersecurity and risk at the American Hospital Association. The association advises hospital mergers to treat cyber risk with the same priority as financial analysis in a merger. But determining and identifying the array of systems and myriad of devices used by another healthcare entity that's being acquired is not easy. "When you buy an organization, you typically don't know everything you're buying," says Kathy Hughes, CISO of New York-based Northwell Health, which has 21 hospitals and over 550 outpatient facilities, many of which were acquired by the organization, which is the result of a 1997 merger between North Shore Health System and Long Island Jewish Medical Center.


Forget ChatGPT vs Bard, The Real Battle is GPUs vs TPUs

Solving for efficient matrix multiplication can cut down on the amount of compute resources required for training and inferencing tasks. While other methods like quantisation and model shrinking have also proven to cut down on compute, they sacrifice accuracy. For a tech giant creating a state-of-the-art model, they’d rather spend the $5 million, if there’s no way to cut costs.  ... NVIDIA’s GPUs were well-suited to matrix multiplication tasks due to their hardware architecture, as they were able to effectively parallelise across multiple CUDA cores. Training models on GPUs became the status quo for deep learning in 2012, and the industry has never looked back. Building on this, Google also launched the first version of the tensor processing unit (TPU) in 2016, which contains custom ASICs (application-specific integrated circuits) optimised for tensor calculations. In addition to this optimisation, TPUs also work extremely well with Google’s TensorFlow framework; the tool of choice for machine learning engineers at the company.
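The reason matrix multiplication parallelises so well is visible even in a naive implementation: every output cell is an independent dot product, so thousands of cells can be computed simultaneously. A plain-Python sketch of the computation that GPU cores and TPU ASICs accelerate:

```python
def matmul(a, b):
    """Naive matrix multiply. Each output cell c[i][j] is an independent
    dot product -- exactly the independence that GPUs exploit across
    CUDA cores and TPUs exploit in their systolic arrays."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert all(len(row) == inner for row in a), "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

c = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # [[19, 22], [43, 50]]
```

For an n-by-n result there are n² such independent dot products, which is why dedicating silicon to running them in parallel pays off so dramatically for training and inference.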


As Digital Trade Expands, Data Governance Fragments

The upshot is that we are still far from any more global efforts. Even preliminary convergence on national laws about data protection and privacy between the United States and the European Union is difficult to achieve. Instead, Aaronson advocated for the establishment of a new international organization that could provide proper incentives to, and pay, global firms to share data. Overall, the panellists urged that technical discussions of data flows, data governance and rules for digital trade be contextualized within fundamental concerns about the nature of data and the role of human rights. These concerns equally require attention and governance. The discussion on effective digital governance requires a fundamental rethink of the nature of data. As emphasized by panellist Kyung Sin Park, data embeds fundamental human freedoms and human information. It is closely linked to human rights. Data is much more than an economic asset used in training artificial intelligence (AI) algorithms.


Fall in Love with the Problem, Not the Solution: A Handbook for Entrepreneurs

Think of a problem—a big problem, something worth solving, something that would make the world a better place. Ask yourself, who has this problem? If you happen to be the only person on the planet with this problem, then go to a shrink. It’s much cheaper and easier than building a startup. But if a lot of people have this problem, go and speak with those people to understand their perception of the problem. Know the reality, and only then start building the solution. If you follow this path and your solution works, it’s guaranteed to create value. But there is a more important part to this. Imagine speaking with people and their feedback is, yeah, go ahead and solve that for me—this is a big problem. All of a sudden you feel committed to this journey. You essentially fall in love with the problem. Falling in love with the problem dramatically increases your likelihood of being successful because the problem becomes the north star of your journey, keeping you focused.


Data Mobility Framework: Expert Offers Four Keys to Know

It’s common for hybrid work teams to schedule when employees will be in the office and when they’ll work remotely. But while remote workers don’t always work from the same home office, they do expect similar access to business data and applications regardless of the network or device they’re using—and all of this remote connectivity has a material impact on data storage demands. Organizations try to balance data storage initiatives to address this without causing downtime to mission-critical applications and data. The faster organizations can add new storage or move data non-disruptively to another location, the better services they can deliver to end-users. Thankfully, the right data migration partner can perform these critical services non-disruptively in a matter of hours. This enables the organization and its partners to access a range of capabilities to minimize data migration efforts, including being able to migrate “hot data” to a new, more powerful array without downtime. Hot data is any data that is in constant demand, such as a database or application that’s essential for your business to operate.


Stop Suffocating Success! 7 Ways Established Businesses Can Start Thinking Like a Startup.

Startups aren't trapped by old rules—they're in the process of inventing themselves. Obviously, established companies can't just completely throw out the rulebook. But remember rules should exist to help, not just because they've always been there. Otherwise, people wind up blindly following often annoying processes without thinking about the end goal. For example, if multiple clients ask for a product feature that hasn't been included, but there isn't a feature review meeting until the next quarter, does it make sense to follow the rules and wait? Or should staff be empowered to add the feature (or, at least, fast-track a product review)? Beware of any policy that exists because "We've-always-done-things-this-way." ... Incompetent workers can take a terrible toll. To start, everything's harder when the people around you don't carry their weight. It's also demoralizing—you're working so hard and hitting all your goals, while the person next to you fails spectacularly and apparently isn't penalized for it. Over time, you're likely to grow bitter or just stop trying so hard since results clearly don't matter.


The Stubborn Immaturity of Edge Computing

Of course, they don’t even think of it as “the edge”. To them, it’s where real work takes place. So when IT vendors and cloud providers and carriers talk about the “far edge” (where real customers and real factories and real work takes place), that makes no sense to people outside of IT vendors’ data-center-centric bubble. The real world doesn’t revolve around the data center, or the cloud. What’s really far in the real world? The cloud. The data center. Edge computing is a technology style that’s part of a digital transformation trend. Digital transformation has been on a march for decades, well before we called it that. It’s accelerated because of cloud computing, and global connectivity. A lot of the technology transformation has been taking place at the back-end. In data centers, in business models. And there’s a lot left to be done. But the true green field in digital transformation is where people and things and factories actually exist. (OK, we’ll call that the “edge”, but that’s such an old IT-centric way of talking!)


How the Future of Work Will Be Shaped by No Code AI

No-code, like other breakthroughs, is a thrilling disruption and improvement in the software development process, particularly for small firms. Among its various applications, no-code has enabled users with little technical experience to create applications using pre-built frameworks and templates, which will undoubtedly lead to further inventions and design and development in the digital town square. It also cuts down on software development time, allowing for faster implementation of business solutions. Aside from the time saved, no-code can enhance computer and human resources by transferring these duties to software suppliers. ... No-code is also a game changer for many AI technology developers and non-technical people since it focuses on something we never imagined possible in the difficult field of artificial intelligence: simplicity. Anyone will be able to swiftly build AI apps using no-code development platforms, which provide a visual, code-free, and easy-to-use interface for deploying AI and machine learning models.


Code Readability vs Performance: Here is The Verdict

Code performance is critical, especially in projects that require high-speed computation and real-time processing, where slow code translates directly into sluggish user experiences. But focusing on the performance of code that is not readable is useless; unreadable code is also more prone to bugs and errors. Performance is a quirky thing. Writing code with performance as the first priority is not a path most developers would take, or even recommend. In a Reddit thread, a developer gives the example of one piece of code that runs in 1 millisecond and another that runs in 0.1 milliseconds: no one can really notice the difference as long as the code is “fast enough”. So chasing performance improvements while sacrificing the readability of the code can be counterproductive. Moreover, in the same Reddit thread, another developer pointed out that faster algorithms often require harder-to-read code, which again sacrifices readability. 
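The trade-off described above can be illustrated with a minimal sketch (the functions and names here are hypothetical, not taken from the Reddit thread): both versions below count the set bits in an integer, one written for readability and one micro-optimized, and for any realistic input the user-visible difference is negligible.

```python
def count_bits_readable(n: int) -> int:
    """Readable version: render the number in binary and count the '1's."""
    return bin(n).count("1")


def count_bits_fast(n: int) -> int:
    """'Optimized' version: Kernighan's trick clears one set bit per loop,
    so it iterates only as many times as there are set bits."""
    count = 0
    while n:
        n &= n - 1  # drop the lowest set bit
        count += 1
    return count


# Both produce identical results; the readable version is the one a
# reviewer can verify at a glance.
assert count_bits_readable(0b101101) == count_bits_fast(0b101101) == 4
```

The second version is the kind of "harder code" the thread warns about: it is clever, but a maintainer has to stop and reason about why `n &= n - 1` works, while the first version is self-explanatory.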


LockBit Group Goes From Denial to Bargaining Over Royal Mail

LockBit's about-face - from "it wasn't us" to "it was us" - is a reminder that ransomware groups will continue to lie, cheat and steal, so long as they can profit at a victim's expense. Isn't hitting a piece of Britain's critical national infrastructure - as in, the national postal service - risky? It can be: after DarkSide hit Colonial Pipeline in the United States in May 2021, for example, the group first blamed an affiliate before shutting down its operations and later rebooting under a different name. While hitting CNI might seem like playing with fire, the consensus among many security experts is that ransomware groups' target selection remains opportunistic. Operators, the affiliates who use their malware, and the initial access brokers from whom they often buy ready-made access to victims' networks all seem to snare whomever they can catch and then perhaps prioritize victims by size and industry. What's notable isn't necessarily that LockBit - or one of its affiliates - hit Royal Mail, but that it decided to press the attack. 



Quote for the day:


“None of us can afford to play small anymore. The time to step up and lead is now.” -- Claudio Toyama