Daily Tech Digest - January 31, 2021

How retailers can manage data loss threats during remote work

While an e-commerce store often relies on many software tools to help make day-to-day operations a little easier, it's likely that the number of apps being used has gone up with the increase in remote work. However, separate software tools don't always play nice together, and the level of access and control they have over your data might surprise you. Some even have the ability to delete your data without warning. At least once a year, e-commerce merchants should audit all the applications connected to their online store. Terms and conditions can change, so it's best to review what has changed over the past 365 days. List all the pros and cons of each integration and decide if any tradeoffs are worth it. SaaS doesn't save everything: software-as-a-service (SaaS) tools will always ensure the nuts and bolts of the platform work, but protecting all the data stored inside a SaaS or cloud solution like BigCommerce or Shopify rests on the shoulders of users. If you don't fully back up all the content and information in your store, there's absolutely no guarantee it will be there the next time you log in. This model isn't limited to just e-commerce platforms. Accounting software like QuickBooks, productivity tools like Trello and even code repositories like GitHub all follow the same model.
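The practical step is to script your own exports so a copy of the store's data always exists outside the SaaS vendor's control. A minimal sketch, assuming a hypothetical REST endpoint and bearer token (substitute whatever export API your platform actually exposes):

```python
import datetime
import json

import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint and token, stand-ins for your platform's export API.
STORE_API = "https://example-store.myshop.com/admin/api/products.json"
API_TOKEN = "replace-with-your-token"

def backup_products(prefix="backup"):
    """Pull the product catalog and write a dated JSON snapshot we control."""
    resp = requests.get(STORE_API, headers={"Authorization": f"Bearer {API_TOKEN}"})
    resp.raise_for_status()
    outfile = f"{prefix}-products-{datetime.date.today().isoformat()}.json"
    with open(outfile, "w") as fh:
        json.dump(resp.json(), fh, indent=2)
    return outfile

if __name__ == "__main__":
    print("Wrote", backup_products())
```

Run something like this on a schedule (cron, a CI job) and the "no guarantee it will be there" risk becomes a restore exercise instead of a loss.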


Don't make these cyber resiliency mistakes

Manea begins by sharing the well-worn axiom that defenders must protect every possible opening, whereas attackers only need one way in. Taken seriously, that truism alone should be enough to replace a prevention attitude with one based on resilience. Manea then suggests caution. "Make sure you understand your organizational constraints—be they technological, budgetary, or even political—and work to minimize risk with the resources that you're given. Think of it as a game of economic optimization." ... Put simply, a digital threat-risk assessment is required. Manea suggests that a team including representatives from the IT department, business units, and upper management work together to create a security-threat model of the organization, keeping in mind: What would an attacker want to achieve? What is the easiest way for an attacker to achieve it? And what are the risks, their severity, and their likelihood? An accurate threat model allows IT-department personnel to implement security measures where they are most needed and not waste resources. "Once you've identified your crown jewels and the path of least resistance, focus on adding obstacles to that path," he said.
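Manea's three questions map directly onto a simple risk register: enumerate attacker goals and paths, score severity and likelihood, and spend on the top of the list first. A toy sketch (assets, paths, and scores are invented for illustration):

```python
# Toy risk register: rank where to add obstacles first.
# Scores are illustrative (1 = low, 5 = high); risk = severity * likelihood.
threats = [
    {"asset": "customer DB", "path": "phished admin credentials", "severity": 5, "likelihood": 4},
    {"asset": "build server", "path": "unpatched CI plugin", "severity": 4, "likelihood": 3},
    {"asset": "marketing site", "path": "CMS brute force", "severity": 2, "likelihood": 4},
]

for t in threats:
    t["risk"] = t["severity"] * t["likelihood"]

# The "crown jewels" and the path of least resistance float to the top.
for t in sorted(threats, key=lambda t: t["risk"], reverse=True):
    print(f'{t["risk"]:>2}  {t["asset"]:<15} via {t["path"]}')
```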


Researchers have developed a deep-learning algorithm that can de-noise images

In conventional deep-learning-based image processing techniques, the number of layers and the network of connections between them decide how many pixels in the input image contribute to the value of a single pixel in the output image. This value is immutable after the deep-learning algorithm has been trained and is ready to de-noise new images. However, Ji says fixing the number of input pixels, technically called the receptive field, limits the performance of the algorithm. “Imagine a piece of specimen having a repeating motif, like a honeycomb pattern. Most deep-learning algorithms only use local information to fill in the gaps in the image created by the noise,” Ji says. “But this is inefficient because the algorithm is, in essence, blind to the repeating pattern within the image since the receptive field is fixed. Instead, deep-learning algorithms need to have adaptive receptive fields that can capture the information in the overall image structure.” To overcome this hurdle, Ji and his students developed another deep-learning algorithm that can dynamically change the size of the receptive field. In other words, unlike earlier algorithms that can only aggregate information from a small number of pixels, their new algorithm, called global voxel transformer networks (GVTNets), can pool information from a larger area of the image if required.
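GVTNets themselves are not reproduced here, but the core idea, letting every output location draw on the whole image instead of a fixed local window, can be sketched as a global self-attention step. A minimal PyTorch sketch (illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

class GlobalAggregation(nn.Module):
    """Minimal global self-attention over all spatial positions.

    Unlike a fixed-size convolution kernel, every output pixel can draw on
    every input pixel, so a repeating motif far away can inform denoising.
    """
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).reshape(b, c, h * w)   # queries
        k = self.k(x).reshape(b, c, h * w)   # keys
        v = self.v(x).reshape(b, c, h * w)   # values
        attn = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return out + x                        # residual connection

noisy = torch.randn(1, 16, 32, 32)            # fake feature map
print(GlobalAggregation(16)(noisy).shape)     # torch.Size([1, 16, 32, 32])
```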


Manufacturers Take the Initiative in Home IoT Security

Although ensuring basic connectivity between endpoint devices and the many virtual assistants they connect to would seem to be a basic necessity, many consumers have encountered issues getting their devices to work together effectively. While interoperability and security standards exist, there are none in place that provide consumers the assurance their smart home device will seamlessly and securely connect. To respond to consumer concerns, “Project Connected Home over IP” was launched in December 2019. Initiated by Amazon, Apple, Google and the Zigbee Alliance, this working group focuses on developing and promoting a standard for interoperability that emphasizes security. The project aims to enable communication across mobile apps, smart home devices and cloud services, defining a specific set of IP-based networking technologies for device certification. The goal is not only to improve compatibility but to ensure that all data is collected and managed safely. Dozens of smart home manufacturers, chip manufacturers and security experts are participating in the project. Since security is one of the key pillars of the group’s objectives, DigiCert was invited to provide security recommendations to help ensure devices are properly authenticated and communication is handled confidentially.


Has 5G made telecommunications sustainable again?

The state of the personal communications market as we enter 2021 bears undeniable similarity to that of the PC market (personal computer, if you've forgotten) in the 1980s. When the era of graphical computing began in earnest, the major players at that time (e.g., Microsoft, Apple, IBM, Commodore) tried to leverage the clout they had built up to that point among consumers, to help them make the transition away from 8-bit command lines and into graphical environments. Some of those key players tried to leverage more than just their market positions; they sought to apply technological advantages as well — in one very notable instance, even if it meant contriving that advantage artificially. Consumers are always smarter than marketing professionals presume they are. Two years ago, one carrier in particular (which shall remain nameless, in deference to folks who complain I tend to jump on AT&T's case) pulled the proverbial wool in a direction that was supposed to cover consumers' eyes. The "5G+" campaign divebombed, and as a result, there's no way any carrier can cosmetically alter the appearance of existing smartphones, to give their users the feeling of standing on the threshold of a new and forthcoming sea change.


Learn SAML: The Language You Don't Know You're Already Speaking

SAML streamlines the authentication process for signing into SAML-supported websites and applications, and it's the most popular underlying protocol for Web-based SSO. An organization has one login page and can configure any Web app, or service provider (SP), supporting SAML so its users only have to authenticate once to log into all its Web apps (more on this process later). The protocol has recently made headlines due to the "Golden SAML" attack vector, which was leveraged in the SolarWinds security incident. This technique enables the attacker to gain access to any service or asset that uses the SAML authentication standard. Its use in the wild underscores the importance of following best practices for privileged access management. A need for a standard like SAML emerged in the late 1990s with the proliferation of merchant websites, says Thomas Hardjono, CTO of Connection Science and Engineering at the Massachusetts Institute of Technology and chair of OASIS Security Services, where the SAML protocol was developed. Each merchant wanted to own the authentication of each customer, which led to the issue of people maintaining usernames and passwords for dozens of accounts.
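To see what a service provider actually consumes, here is a minimal sketch that pulls the authenticated subject out of a drastically simplified SAML assertion using Python's standard library. Real assertions are digitally signed, and the signature must be verified before anything is trusted; Golden SAML works precisely by forging assertions with a stolen signing key:

```python
import xml.etree.ElementTree as ET

# A drastically simplified assertion for illustration; real assertions carry
# an XML digital signature that MUST be validated before trusting the subject.
ASSERTION = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject>
    <saml:NameID>alice@example.com</saml:NameID>
  </saml:Subject>
</saml:Assertion>
"""

ns = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
root = ET.fromstring(ASSERTION)
name_id = root.find("./saml:Subject/saml:NameID", ns)
print("Authenticated subject:", name_id.text)  # alice@example.com
```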


Biometrics ethics group addresses public-private use of facial recognition

“To maintain public confidence, the BFEG recommends that oversight mechanisms should be put in place,” it said. “The BFEG suggests that an independent ethics group should be tasked to oversee individual deployments of biometric recognition technologies by the police and the use of biometric recognition technologies in public-private collaborations (P-PCs). “This independent ethics group would require that any proposed deployments and P-PCs are reviewed when they are established and monitored at regular intervals during their operation.” Other recommendations included that police should only be able to share data with “trustworthy private organisations”, specific members of which should also be thoroughly vetted; that data should only be shared with, or accessible to, the absolute minimum number of people; and that arrangements should be made for the safe and secure sharing and storage of biometric data. The BFEG’s note also made clear that any public-private collaborations must be able to demonstrate that they are necessary, and that the data sharing between the organisations is proportionate.


Security Threats to Machine Learning Systems

The collection of good and relevant data is a very important task. For the development of a real-world application, data is collected from various sources. This is where an attacker can insert fraudulent and inaccurate data, thus compromising the machine learning system. So, even before a model has been created, the whole system can be compromised by an attacker inserting a very large chunk of fraudulent data; this is a stealthy channel attack. This is the reason why data collectors should be very diligent while collecting the data for machine learning systems. ... Data poisoning directly affects two important aspects of data: data confidentiality and data trustworthiness. Often the data used for training a system contains confidential and sensitive information, and through a poisoning attack the confidentiality of that data is lost. Maintaining the confidentiality of data is a challenging area of study by itself; the added machine learning dimension makes the task of securing that confidentiality that much more important. Another important aspect affected by data poisoning is data trustworthiness.
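The damage is easy to demonstrate: flipping labels on even a modest fraction of the training set measurably degrades a model. A small sketch on synthetic data with scikit-learn (poisoning fractions are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poison(fraction):
    """Flip labels on a fraction of the training set, then evaluate."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(fraction * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # attacker flips labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} poisoned -> test accuracy {accuracy_with_poison(frac):.3f}")
```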


Fuzzing (fuzz testing) tutorial: What it is and how it can improve application security

We know when a programmer is developing code, they have different computations depending upon what the user gives them. So here the program is the maze, and then we have, let's just pretend, a little robot up here, and the input to the program is going to be directions for our robot through the maze. So for example, we can give the robot the directions, I'm going to write it up here: down, left, down, right. And he's going to take two rights, just meaning he's going to go to the right twice. And then he's going to go down a bunch of times. So you can think about giving our little robot this input, and the robot is going to take that as directions, and he's going to take this path through the program. He's going to go down, left, down, first right, second right, then a bunch of downs. And when you look at this, say we had a little bug over here: this path avoids it, so we can verify that this run is actually okay. There's no actual bug on this path. And this is what's happening when a developer writes a unit test. So what they're doing is they're coming up with an input and they're making sure that it gets the right output. Now, a problem is, if you think about this maze, we've only checked one path through this maze and there's other potential lurking bugs out there.
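A fuzzer automates exactly this: instead of one hand-checked path, it throws thousands of random direction strings at the program and watches for crashes. A minimal sketch of the idea, with a toy maze function standing in for the program under test:

```python
import random

def walk_maze(directions):
    """Stand-in for the program under test: crashes on one rare path."""
    x = y = 0
    for step in directions:
        if step == "d": y += 1
        elif step == "u": y -= 1
        elif step == "l": x -= 1
        elif step == "r": x += 1
        if (x, y) == (3, 2):             # the lurking bug a unit test missed
            raise RuntimeError("hit the bug tile!")

def fuzz(trials=100_000, max_len=12):
    """Generate random inputs until one crashes or the budget runs out."""
    for _ in range(trials):
        inp = "".join(random.choice("udlr") for _ in range(max_len))
        try:
            walk_maze(inp)
        except RuntimeError:
            return inp                    # crashing input found
    return None

crash = fuzz()
print("Crashing input:" if crash else "No crash found in budget:", crash)
```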


The three steps for smart cities to unlock their full IoT potential

In theory, if a city applied uniform standards across all of its IoT-connected devices, it could achieve full interoperability. In practice, however, we believe that cities and regulators should focus on defining common communication standards to support technical interoperability. The reason: although different versions exist, communications standards are generally mature and widely used by IoT players. In contrast, the standards that apply to messaging and data formats—and are needed for syntactic interoperability—are less mature, and semantic standards remain in the early stages of development and are highly fragmented. Some messaging and data format standards are starting to gain broad acceptance, and it shouldn’t be long before policymakers can prudently adopt the leading ones. With that scenario in mind, planners should ignore semantic standards until clear favorites emerge. Building a platform that works across use cases can improve interoperability: the platform effectively acts as an orchestrator, translating interactions between devices so that they can share data and work together. In a city context, a cross-vertical platform offers significant benefits over standardization.
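Syntactic interoperability comes down to message and data formats: two vendors can report the same physical reading in entirely different shapes. A sketch of the translation a cross-vertical platform performs (both vendor payload shapes are invented):

```python
from datetime import datetime

# Two invented vendor payloads for the same physical reading.
vendor_a = {"dev": "lamp-17", "ts": 1612051200, "temp_c": 21.5}
vendor_b = {"deviceId": "LAMP_17", "timestamp": "2021-01-31T00:00:00Z",
            "temperatureF": 70.7}

def normalize_a(msg):
    return {"device": msg["dev"], "unix_ts": msg["ts"], "celsius": msg["temp_c"]}

def normalize_b(msg):
    ts = datetime.fromisoformat(msg["timestamp"].replace("Z", "+00:00"))
    return {"device": msg["deviceId"].lower().replace("_", "-"),
            "unix_ts": int(ts.timestamp()),
            "celsius": round((msg["temperatureF"] - 32) * 5 / 9, 1)}

# The platform "orchestrates" by translating everything to one schema.
for reading in (normalize_a(vendor_a), normalize_b(vendor_b)):
    print(reading)   # both normalize to the identical record
```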



Quote for the day:

"Education makes a people difficult to drive, but easy to lead; impossible to enslave, but easy to govern." -- Lorn Brougham

Daily Tech Digest - January 30, 2021

Internet of Cars: A driver-side primer on IoT implementation

There are millions of internet-connected cars already on the road, albeit mostly with crude subscription services for music and weather apps. With further advances, connection will be much more encompassing, with the average connected car having up to 200 sensors installed, each recording a point of data, minute by minute. The numbers quickly become staggering, and in emergency situations, the need for data agility is apparent. Picture driving on a highway in medium traffic. If someone’s tire blows out half a mile ahead, this information could be quickly conveyed to surrounding cars, warning of the potential for emergency braking. Any DLT solution would have to include a very nimble verification process for all these new packets of information to be brought into and carried by the network. Additionally, because of the computational complexity involved, almost all DLTs today charge a fee for each new transaction brought into the network. In fact, the fee is an integral part of the structure of many of these computational models. This is obviously not going to be workable in a system like urban traffic that would be generating billions of “transactions” every day. The truth is that decentralized data networks were never designed to handle these kinds of massive use-case scenarios.
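The "staggering" numbers are easy to put a rough figure on. A back-of-the-envelope sketch, assuming one reading per sensor per minute and a conservative per-transaction fee (both assumptions illustrative):

```python
# Back-of-the-envelope, with illustrative assumptions.
connected_cars = 10_000_000       # a fraction of the cars already on the road
sensors_per_car = 200             # figure cited above
readings_per_day = 24 * 60        # one reading per sensor per minute

daily_data_points = connected_cars * sensors_per_car * readings_per_day
print(f"{daily_data_points:,} data points/day")   # 2,880,000,000,000

# At even a hundredth of a cent per DLT transaction, per-reading fees
# are a non-starter:
fee_usd = 0.0001
print(f"${daily_data_points * fee_usd:,.0f}/day in fees")  # $288,000,000
```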


AI vendors may have to prove systems don't discriminate

Providing proof that AI models are non-discriminatory means AI vendors would have to become much more transparent about how AI models were trained and developed, according to Purcell. "In the bill, it talks about the necessity of understanding what the training data was that went into creating the model," he said. "That's a big deal because today, a lot of AI vendors can just build a model kind of in secret or in the shadows and then put it on the market. Unless the model is being used for a highly regulated use case like credit determination or something like that, very few people ask questions." That could be easier for the biggest AI vendors, including Google and Microsoft, which have invested heavily in explainable AI for years. Purcell said that investment in transparency serves as a differentiator for them now. In general, bias in an AI system largely results from the data the system is trained on. The model itself "does not come with built-in discrimination, it comes as a blank canvas of sorts that learns from and with you," said Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis. Yet, many vendors sell pre-trained models as a way to save their clients the time and know-how it normally takes to train a model. That's ordinarily uncontroversial if the model is used to, say, detect the difference between an invoice and a purchase order, Pelz-Sharpe continued.
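One blunt check a vendor could be asked to report is the disparate impact ratio, sometimes called the four-fifths rule. A minimal sketch on made-up model outcomes:

```python
import numpy as np

# Made-up model decisions (1 = favorable outcome) for two groups.
group = np.array(["A"] * 100 + ["B"] * 100)
approved = np.array([1] * 60 + [0] * 40 + [1] * 42 + [0] * 58)

rate_a = approved[group == "A"].mean()   # 0.60
rate_b = approved[group == "B"].mean()   # 0.42
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:   # the informal "four-fifths" threshold
    print("Potential disparate impact -- examine training data and features.")
```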


Microsoft releases Application Guard for Office to M365 customers

Application Guard for Office isolates certain files opened in the suite's three primary applications: Word, Excel and PowerPoint. Documents obtained from untrusted Internet or intranet domains, files pulled from potentially unsafe areas, and attachments received through the Outlook email client, are opened in a virtualized environment, or sandbox, where malicious code can't wreak havoc. Unlike the much older Protected View, another Office defensive feature — it opens potentially dangerous documents as read-only — files opened in Application Guard can be manipulated. They can be printed, edited and saved. When saved, they remain in the isolation container and when reopened later, again are quarantined in the sandbox. Outdated file types — which can be set by administrators in the File Block feature within Word, Excel and PowerPoint — are also shunted into Application Guard's virtual machine. Application Guard for Office will be available to customers licensing Microsoft 365 E5 or Microsoft 365 E5 Security, and for now, only to those on either the Current Channel or Monthly Enterprise Channel. (Those are the Microsoft 365 update channels that deliver the most frequent refreshes.)


Digital nomads and "bleisure" define the new high-tech take on work trips

Many organizations have adopted remote work policies amid a modern plague. While some companies have brought telecommuters back to the traditional office, others have made long-term commitments to remote work. Ernest Lee, managing director of development and investments, Americas, with citizenM hotels, similarly alluded to remote work-enabled "nomadic behavior" among professionals. The company recently announced a global passport: a subscription service allowing remote workers with a penchant for frequent traveling the ability to stay in any of citizenM's 21 hotels around the globe. "We certainly think that this new sort of lifestyle will attract a certain type of person that wants to also blend in their personal interests and passions [with] not having to be tied down so much to a fixed location," Lee said. The company also offers a corporate subscription providing organizations with access to hotel rooms and meeting room spaces at a fixed price. Lee explained that this package is designed for remote teams who are no longer sharing "the same co-located space." To enhance the traditional business travel experience, hotels are incorporating a wide range of technologies, in-app features, Internet of Things (IoT) capabilities, and more.


'Clone Firm' Fraudsters Stealing Millions From UK Investors

A clone firm is a fake entity created by fraudsters that uses the name, address and Firm Reference Number - a unique identifier assigned to every financial or investment firm in the U.K. and issued by the Financial Conduct Authority - of a legitimate organization, according to the alert. In some cases, the scammers will clone or spoof the entire website of a legitimate firm. Once these fake and spoofed websites are created, the fraudsters then send sales and marketing materials to would-be investors that appear to originate from legitimate firms. The scammers also advertise on social media, according to the alert. The fraudsters use phishing emails and social engineering techniques to lure victims, and their use of the legitimate sales materials gives the scheme a sheen of authenticity. Once a connection is established, the fraudsters attempt to get victims to send money to the cloned firm, the NCA notes. "Fraudsters use literature and websites that mirror those of legitimate firms, as well as encouraging investors to check the Firm Reference Number on the FCA Register to sound as convincing as possible," says Mark Steward, executive director of enforcement and market oversight for the Financial Conduct Authority.


DDoS Attacks Reach Over 10 Million in 2020

Richard Hummel, threat intelligence lead at NETSCOUT, said, “It is no coincidence that this milestone number of global attacks comes at a time when businesses have relied so heavily on online services to survive. Threat actors have focused their efforts on targeting crucial online platforms and services such as healthcare, education, financial services and e-commerce that we all rely on in our daily lives. As the COVID-19 pandemic continues to present challenges to businesses and societies around the world, it is imperative that defenders and security professionals remain vigilant to protect the critical infrastructure that connects and enables the modern world.” DDoS attack count, bandwidth, and throughput all saw significant increases since the start of the global COVID-19 pandemic. For instance, attack frequency rose 20% year over year, but that includes the pre-pandemic months of January, February, and most of March. For the second half of 2020, which was entirely pandemic-ridden, attacks rose 22% year over year. As cybercriminals quickly exploited pandemic-driven opportunities, we saw another kind of ‘new normal.’ Monthly DDoS attacks regularly exceeded 800,000 starting in March, as the pandemic lockdown took effect. 


IoT at the edge: magic won’t happen automatically

Creating more value at the edge: Dheeraj Remella, Chief Product Officer at VoltDB, notes the uncertainty around many edge and IoT business cases. He argues, “Telcos spend a lot of time talking about moving up the value chain beyond connectivity, and this is a great opportunity. Differentiation is based on sets of complementary features, contributed by an ecosystem, that create capabilities rather than individual features, which as stand-alones are not compelling. The owner of the platform that delivers that joint capability holds the keys to the digital kingdom.” As Remella points out, decisioning at low-millisecond speed is one thing on a private network within an industrial plant, but another ball game when the edge is hugely distributed, such as a wind farm over hundreds or thousands of acres, or for smart agriculture or an electricity grid. He says that often, to cut down processing times at the edge, companies take what he calls a “hyper-contextualised” approach – automating decisions based on data about a single entity or an isolated set of events. This limits its usefulness, merely making existing processes digital (digitising) rather than using advances in technology to do things we’ve never been able to do before (digitalising), which means doing things differently – changing processes.


Sorry, Data Lakes Are Not “Legacy”

From a technical perspective, compute and storage are intended to be loosely coupled in a modern architecture. This is a benefit for warehouses; however, the benefit is not just for warehouses. Any modern data architecture, by design, depends on a loosely coupled separation of compute and storage to deliver an efficient, scalable, and flexible solution. The fact that data warehouse vendors are introducing separate compute and storage is not innovation compared to data lakes; it is achieving parity with data lakes. The evolution of separate compute and storage in warehouses brings them in line with the architecture employed by productive data lakes via on-demand SQL query services. In a post called When to Adopt a Data Lake — and When Not to, a dig at data lakes was that they could not scale compute easily or on demand: “Some solutions architects have proposed data lakes to ‘separate compute from storage’ in a traditional data warehouse. But they’re missing the point: You want the ability to scale compute easily and on-demand. A data lake isn’t going to give you this; what you need is a data warehouse that can provision and suspend capacity whenever you need it.”
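The pattern the post points at, on-demand SQL over files, is simple to demonstrate: storage is just Parquet in an object store, and compute exists only while the query runs. A sketch with DuckDB against local files (a real lake would use object-store paths instead; pip install duckdb):

```python
import duckdb  # embedded engine: compute exists only while the query runs

# Storage is just Parquet files -- local here; a lake would point at
# s3://bucket/... paths. No always-on warehouse cluster is required.
con = duckdb.connect()
con.execute("""
    CREATE TABLE orders AS
    SELECT * FROM (VALUES (1, 'widget', 9.99), (2, 'gadget', 24.50))
    AS t(id, sku, price)
""")
con.execute("COPY orders TO 'orders.parquet' (FORMAT PARQUET)")

# "Separate compute": a fresh session queries the stored files directly.
result = duckdb.sql("SELECT sku, sum(price) FROM 'orders.parquet' GROUP BY sku")
print(result)
```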


AI, machine learning effective in cyber defence, but can also present challenges

"Antivirus technology, for example, operates a strict ‘yes or no’ policy as to whether a file is potentially malicious or not. It’s not subjective, through a strict level of parameters, something is either considered a threat, or not." he says. "The AI can quickly determine whether it’s going to crash the device, lock the machine, take down the network and as such, it is either removed or allowed. "It is important to note that VIPRE uses AI and ML as key components in their email and endpoint security services for example as part of their email security attachment sandboxing solution where an email attachment is opened and tested by AI in an isolated environment away from a customer’s network," Paterson adds. "So while AI might not be an ideal method for preventing accidental data leakage through email, it does have an important part to play in specific areas such as virus detection, sandboxing and threat analysis." Paterson says with so much reliance on email within business practices, accidental data leakage is an inevitable risk. "The implications of reputational impact, compliance breach and associated financial damage can be devastating. A cyber-aware culture with continuous training is essential, and so is the right technology," he says.


Does CI/CD impact telecom operations?

In the standard microservice code model that underpins cloud-native software, every time a common code software component is improved, it will change all network systems that use that standard code. This approach can bring lightning-fast agility and innovation but leaves today's legacy bi-annual software test and validate processes entirely unfit for purpose. The telecom CI/CD philosophy means that software is developed, delivered, tested, accepted, and brought into operation incrementally at a far higher cadence than previously in a traditional service provider environment. Further, it creates a significant software development volume that needs validation on an increasingly dynamic network. This approach implies that continuous software validation and continuous testing must accompany continuous software delivery and deployment. These requirements demand a new agile way of working between the network operator, its software suppliers, and vendors. Essentially, the merging of Dev and Ops as in the IT world is now a must for the telecom context where the 'Dev' from vendors needs to seamlessly merge and receive feedback from the 'Ops' on the operator side of the firewall. This evolution requires a transformation on both the vendor side as well as the operator side.



Quote for the day:

"Entrepreneurship is the last refuge of the trouble making individual." -- Natalie Barney

Daily Tech Digest - January 29, 2021

Expert: Agile data-driven decision-making key to growth

"You can't achieve agility, and you can't be adaptive unless you empower your business users with as much self-service analytics and business intelligence and reporting as they can consume," Evelson said. "Self-service is really the only way to become agile and adaptive." That, however, is linked to data governance, which is also imperative to agile data-driven decision-making. "There is a very fine line between too much self-service and not enough governance, versus too much governance and not enough self-service," Evelson added. "Hopefully, there is a middle ground between the two, which we call Goldilocks data governance." All of the competencies together, meanwhile, enable an organization to be agile through what Evelson terms multi-modal analytics and reporting. They empower organizations to do descriptive analytics through dashboards and reports, diagnostic and predictive analytics to get insights, and ultimately prescriptive and actionable analytics to make decisions and trigger actions. And should organizations fail to become agile and adapt to constant change, they risk irrelevancy and ultimately insolvency. Forty years ago, the average lifespan of companies in the S&P 500 was about 30 years, Evelson said.


The Brain Is Neither a Neural Network nor a Computer

Autonomy is the idea that the brain is self-governing, receptive to the environment, but always in control. Somatic disorders ranging from improper sugar levels and hormone imbalances to diseases such as malaria or syphilis can cause mental dysfunction. Some individuals are placed in mental hospitals when correcting an underlying disorder would actually fix the problem. At the simplest level, no amount of mental determination would make you a world-class athlete if you did not have the right type of muscle fibers or hand-eye coordination. You cannot flap your arms and fly—the aerodynamics does not allow it. Paganini could only be the legendary violinist he was because of his flexibility. No amount of musicianship could provide that ability. Cognitive processes are embodied. They emerge from the interaction between physical organisms and their environment, not just their brains. For example, there is evidence that the nature of your gut bacteria can cause anxiety, stress, and even depression. Replacing a diseased organ with a healthy one can increase mental functioning. A kidney transplant will help remove poisons from the blood such as urea or ammonia which will increase brain health.


The state of corporate legal departments and the role of the Chief Legal Officer

The survey affirms we are in the “age of the CLO.” With 78 percent of respondents reporting to the CEO, the overall trend remains very positive. Further, while CLOs still spend around one quarter of their time providing legal advice, they also spend a significant amount of time on board matters and governance issues, contributing to strategy development, and advising other executives on non-legal issues. The survey found that 46 percent of CLOs are responsible for their company’s data privacy function, reflecting the growing integration of legal in business strategy and technology policy. In the order of functions reporting to the Chief Legal Officer, only compliance (74 percent) outranks privacy. CLOs are also increasingly engaging with environmental, social, and governance issues. This includes diversity and inclusion (D&I). A full 72.7 percent of CLOs expect diversity and inclusion specifically to accelerate in 2021. Encouragingly, even despite COVID-19, 32 percent of law departments plan to take on more lawyers in 2021, a slight increase over 30 percent from 2020.

Defense Against Vulnerabilities in the Cloud – Is It Too Late?

Apart from the traditional challenges around access management, data pilferage and threats from data communication with third-party applications are gaining prominence. Communication with third-party applications has found increased traction through APIs, which are increasingly being targeted by threat actors. Further, misconfigurations and policy violations in cloud assets create potential vulnerabilities and backdoors, leading to risk of compromise. This is primarily due to the policies of some companies to not change the default security settings on their cloud workloads. These cloud vulnerabilities are accentuated by the increasing number of connected systems and their dependencies. The genesis of many vulnerabilities boils down to access and privilege management. Organizations need to plan for a deep inspection and vulnerability management system as part of their devsecops pipeline for building scalable cloud-native applications. A comprehensive vulnerability management system goes a long way toward enabling organizations to effectively manage and minimize their attack surface.
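Checks for the misconfigurations described here are scriptable. A minimal sketch with boto3 (assumes AWS credentials are configured) that flags S3 buckets with no public-access block configured; one narrow check, not a full posture-management tool:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# One narrow check: does each bucket have a public-access block configured?
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        blocked = all(cfg["PublicAccessBlockConfiguration"].values())
        status = "ok" if blocked else "PARTIAL block -- review"
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            status = "NO public-access block -- default settings left in place"
        else:
            raise
    print(f"{name}: {status}")
```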


How to build a trustworthy and connected future

More broadly, big(ger) data from personal, commercial and government sources has the potential to address various challenges related to the Sustainable Development Goals. For instance, the Humanitarian and Resilience Investing Initiative aims to fill critical gaps in the available data that are preventing investors from accessing more humanitarian and resilience investing (HRI) opportunities. The pandemic has exposed and exacerbated existing gaps and inequalities: notably, almost half of the global population remain offline, and broadband services are too expensive for 50% of the population in developing countries. These “connectivity deserts” hamper access to health, education and economic inclusion. In a bid to improve access to the digital economy, during The Davos Agenda, the Forum launched the Essential Digital Infrastructure and Services Network, or EDISON Alliance, tasked with working to accelerate digital inclusion. Meanwhile, in metropolises around the globe, which account for nearly two-thirds of CO2 emissions, smart energy infrastructure connected through data and digitalization is central to transitioning to “net zero” cities.


2020 Marked a Renaissance in DDoS Attacks

The sheer quantity of attacks in 2020 was surprising, Kaczmarek says. "We always expect the number of attacks to increase year over year and quarter over quarter, but we didn't expect that the quantity would increase by over 150%," he says. "This truly reflects the impact of the pandemic and the challenging precedent the 'new normal' has set for cybersecurity." The number of DDoS attacks that involved two or more vectors increased from 40% in 2019 to 72% in 2020, Kaczmarek added. "This means that the attackers as well as the tools they are using are improving," he says. According to Neustar, while the use of DDoS to try and extort ransoms is not new, these attacks grew in persistence, sophistication, and targeting in 2020. Cyber extortionists purporting to belong to well-known nation-state groups went after organizations in industries they have not regularly targeted previously, such as financial services, government, and telecommunications. "RDDoS attacks surged in Q4 2020 as groups claiming to be Fancy Bear, Cozy Bear, and the Lazarus Group attempted to extort organizations around the world," says Omer Yoachimik, product manager, DDoS protection at Cloudflare, another vendor that observed the same trend.


A better kind of cybersecurity strategy

The core of the matter involves deterrence and retaliation. In conventional warfare, deterrence usually consists of potential retaliatory military strikes against enemies. But in cybersecurity, this is more complicated. If identifying cyberattackers is difficult, then retaliating too quickly or too often, on the basis of limited information such as the location of certain IP addresses, can be counterproductive. Indeed, it can embolden other countries to launch their own attacks, by leading them to think they will not be blamed. “If one country becomes more aggressive, then the equilibrium response is that all countries are going to end up becoming more aggressive,” says Alexander Wolitzky, an MIT economist who specializes in game theory. “If after every cyberattack my first instinct is to retaliate against Russia and China, this gives North Korea and Iran impunity to engage in cyberattacks.” But Wolitzky and his colleagues do think there is a viable new approach, involving a more judicious and well-informed use of selective retaliation. “Imperfect attribution makes deterrence multilateral,” Wolitzky says. “You have to think about everybody’s incentives together. Focusing your attention on the most likely culprits could be a big mistake.”


US, China or Europe? Here's who is really winning the global race for AI

On almost all metrics, therefore, the EU seems to be taking a backseat; and according to the researchers, there is no doubt that this is due to stringent regulations that are in place within the bloc. "Many in Europe do not trust AI and see it as technology to be feared and constrained, rather than welcomed and promoted," concludes the report, recommending that the EU change its regulatory system to be "more innovation-friendly". The General Data Protection Regulation (GDPR), say the researchers, limits the collection and use of data that can foster developments in AI. Proposals for a Data Governance Act, while encouraging the re-use of public sector data, also restrains the transfer of some information; and by creating European data spaces, the regulation could inhibit global partnerships. Recent reports show that the last year has seen almost a 40% increase in GDPR fines issued by the EU compared to the previous 20 months, reaching a total of $332 million in fines since the new laws started applying. In that context, it is not rare to find that some firms are deterred from developing AI systems altogether, out of fear of receiving a fine – even for the most well-intentioned innovations.


A Guide to Find the Right IoT Module for Your Project

As more small and new module providers emerge in the IoT market, many cheaper IoT modules are becoming available to customers at extremely attractive price tags. If we simply look at the initial deployment cost of using cheaper modules, it might look like they save a lot of money for the customers. But is the quality of these modules guaranteed? The process of developing a new product and making it deliverable to the market is long and costly. Low-quality modules always carry a higher risk of malfunction and, at worst, result in the failure of the whole project. This will not help IoT companies generate the expected project income; on the contrary, it causes a greater loss on the investment. From a long-term perspective, even if the product is launched to the market, the unstable performance of the module is likely to cause unwanted surprises and require frequent maintenance. This is not simply a higher operating cost to the business; it will also harm the reputation of the brand and damage the customers’ loyalty. For the long-term growth of the business, choosing a reliable partner and quality-guaranteed module products is wise and worthwhile.


Researchers: Beware of 10-Year-Old Linux Vulnerability

The vulnerability, called "Baron Samedit" by the researchers and officially tracked as CVE-2021-3156, is a heap-based buffer overflow in the Sudo utility, which is found in most Unix and Linux operating systems. Sudo is a utility included in open-source operating systems that enables users to run programs with the security privileges of another user, which would then give them administrative – or superuser – privileges. The bug, which appears to have been added into the Sudo source code in July 2011, was not detected until earlier this month, Qualys says. "Qualys security researchers have been able to independently verify the vulnerability and develop multiple variants of exploits and obtain full root privileges on Ubuntu 20.04 (Sudo 1.8.31), Debian 10 (Sudo 1.8.27), and Fedora 33 (Sudo 1.9.2). Other operating systems and distributions are also likely to be exploitable," the researchers say. After Qualys notified the authors of Sudo, a patch was included in version 1.9.5p2, published this week. Qualys and the Sudo authors are urging Linux and Unix users to immediately patch systems. Rob Joyce, who was recently named director of the National Security Agency's Cybersecurity Directorate, also flagged the alert on Twitter.
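Administrators can at least script the version check. A sketch that parses `sudo --version` and compares against the patched release; note that distributions often backport fixes without bumping the upstream version, so treat a "vulnerable" result as a prompt to check your vendor's advisory:

```python
import re
import subprocess

PATCHED = (1, 9, 5, 2)   # CVE-2021-3156 fixed upstream in sudo 1.9.5p2

out = subprocess.run(["sudo", "--version"], capture_output=True, text=True).stdout
match = re.search(r"Sudo version (\d+)\.(\d+)\.(\d+)(?:p(\d+))?", out)
if not match:
    raise SystemExit("could not parse sudo version output")
version = tuple(int(g or 0) for g in match.groups())

if version >= PATCHED:
    print("sudo is at or beyond 1.9.5p2 -- patched against Baron Samedit.")
else:
    # Distros backport fixes, so verify against your vendor's advisory too.
    print("sudo predates 1.9.5p2 -- check for a backported fix, then patch.")
```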



Quote for the day:

"Believe those who are seeking the truth. Doubt those who find it." -- Andre Gide

Daily Tech Digest - January 28, 2021

Engaging Employees to Accelerate Digital Banking Transformation

Many financial institutions are investing heavily in new technologies and processes to support their digital banking transformation goals. Research by the Digital Banking Report has found that banks and credit unions have increased investment in digital transformation in each of the past four years. There is no doubt that these investments are justified given the flight to digital by consumers and the game-changing technology that can support digital customer experience improvements. Unfortunately, with such a focus on data, analytics, technology and systems, most firms ignore the need to invest in employees to make sure they maximize the value of the new tools being deployed. Beyond open communication around how employees can be a part of the digital banking transformation process, it is important to invest in training the people to ensure that the digital banking transformation efforts succeed. If you don’t, it’s like buying a new car but failing to fill the gas tank (or charge the batteries). To respond to the need to reskill and upskill current employees, new models of managing learning and development have emerged. More than replicating legacy training methods, new learning officer positions have been created with the responsibility of not only creating ongoing learning opportunities, but also supporting cultural transformation.


Here’s why upskilling is crucial to drive the post-COVID recovery

We have a pressing societal problem: how to equip people with the skills they need to participate in the economy – now and in the future. As outlined in the World Economic Forum’s latest Future of Jobs Report, half of all employees around the world will need reskilling by 2025 – and that number doesn’t include all the people who are currently not in employment. If we don’t act now, this skills gap will only widen. With challenges come opportunities. Crisis events, like the pandemic, can and should shape economic thinking and represent a rare but narrow window of opportunity to reflect, reimagine, and reset priorities. So let’s seize this opportunity. We’re calling on governments, business leaders, and educators to join us in a global movement for upskilling. As you’ll see in our new report – Upskilling for Shared Prosperity – published as part of Davos Agenda Week to mark the first anniversary of the World Economic Forum’s Reskilling Revolution Platform, there’s a clear social and economic case for upskilling. If we commit to giving all people opportunities to build the skills they will need to fully participate in the future workplace, it will, in turn, lead to a prosperity dividend.


Law enforcement takes over Emotet, one of the biggest botnets

According to Europol, Emotet's infrastructure consisted of several hundred servers located across the world and serving different purposes, including making the botnet more resilient to takeover attempts. Law enforcement agencies had to work together to develop a strategy that involved gaining control of the infrastructure from the inside and redirecting victims to servers under their own control. As part of the investigation, the Dutch National Police seized data from the servers used by Emotet, including a list of stolen email credentials abused by the botnet. The agency set up a web page where users can check if their email address was among those affected. The information about infected computers that was gathered during the operation was also shared with national CERTs so the victims can be identified and contacted. "Only time will tell if the takedown will have long-term impact to Emotet operations," Jason Passwaters, COO of security firm Intel 471, tells CSO. "These groups are sophisticated and will have baked in some sort of recovery. Emotet itself does not appear to have any sort of inherent recovery mechanism, but a lot of the infected machines will have other malware installed as well, such as Qbot, Trickbot or something else. ..."


Top 5 Evolving Cybersecurity Threats to Cloud Computing in 2021

According to the Sophos Threat Report of 2020, misconfigurations can drive numerous data-breaching incidents. Businesses are integrating themselves ever more deeply with cloud computing, which all but guarantees that cloud jacking will grow as a threat. Trend Micro predicts that code injection attacks, whether through third-party libraries, SQL injection or cross-site scripting, will be utilized to attack cloud platforms. Attackers inject malicious code through third-party libraries that individuals then download and execute unintentionally. According to typical public cloud vendors, they are only responsible for the security of their infrastructure, while individuals are responsible for protecting their data. ... Social engineering uses phishing scams to steal user credentials for both cloud-service and on-premises attacks. Do you know that 78% of data-breaching incidents that occurred during 2019 were related to phishing? This percentage increased in 2020. Innovative phishing attempts are launched through cloud applications rather than traditional emails, and phishing kits, which require very little technical skill to operate, make it easier for cybercriminals to carry out these illicit activities.


What Is Robomorphic Computing?

A robot’s operation is a three-step process: gathering data using sensors or cameras; using mapping and localisation techniques to understand the environment; and plotting the course of action. Advances in embedded vision and SLAM technology make data gathering and localisation easy. However, all these steps take a lot of time, especially when calculations are done on CPUs. Previously, the researchers investigated the software side, developing efficient algorithms to speed up robots. The MIT folks concluded it’s time to look beyond software. Hardware acceleration is the use of a specialised hardware unit to do certain computing tasks more efficiently. While graphics processing units (GPUs) have been used for such tasks, their application is limited since the use cases differ from robot to robot. Hence, the researchers at MIT developed robomorphic computing to devise a customised hardware unit for each individual robot. It considers the physical parameters of the robot and the tasks it needs to perform and translates them into mathematical matrices to design a specialised hardware architecture. The resulting chip design is unique to the robot and maximises its efficiency.
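The matrix insight can be illustrated without any hardware: a robot's kinematic tree determines which links can influence which others, so many entries in the relevant matrices are structurally zero, and a chip can skip them. A toy sketch with an invented five-link parent list:

```python
# Toy illustration: a robot's kinematic tree fixes which links influence
# which others, so many matrix entries are structurally zero. Robomorphic
# computing bakes exactly this sparsity into the hardware design.
parent = [-1, 0, 1, 2, 2]   # invented 5-link arm; link i's parent, base = -1

n = len(parent)
influence = [[0] * n for _ in range(n)]
for i in range(n):
    j = i
    while j != -1:           # a link is influenced by itself and its ancestors
        influence[i][j] = 1
        j = parent[j]

for row in influence:        # zeros mark computations hardware can skip
    print(row)
```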


Digital Identity Is the New Security Control Plane

Digital identity — in the form of trusted contextual data defining who is accessing a system and how — provides this control plane. Users are already providing identity (and likely at multiple points). Systems are already consuming it — in the case of software-as-a-service (SaaS) environments, it may be one of the few configurable security controls available — but the decoupling of security from location and IP address is present in many other solutions. It can be tailored to an organization's needs and be risk-sensitive, with different methods and phases required, depending on the resource accessed. Even better, it's a control plane that can and should be implemented in a phased approach and provides a path to a zero-trust network architecture. The steps to building this are conceptually simple, and we can do extensive preparation. First, ensure even before you implement that the technologies you are investing in are identity-aware and able to make differentiated security decisions in the data plane based on that identity. This must extend to SaaS applications — one of the largest benefits of using identity as your control plane is the ability to bring these into the fold, as it were, and to match them to your security model. Second, consolidate identity to a single "source of trust" — that is, a single secure, consistent, and accurate repository for identity.

Data Privacy Day 2021: What to consider in the wake of Covid-19

The exit of the UK from the EU means that companies across the country that deal with Europe need to take extra steps to ensure correct compliance. According to Rich Vibert, CEO and co-founder of Metomic, this can be aided by considering this aspect at the start of any deployment. “This Data Privacy Day, we must confront the fact that UK companies aren’t equipped to protect their data now that we’ve Brexited,” said Vibert. “A large proportion of the responsibility for this lies with the UK government, whose failure to deliver guidance during the transition period resulted in businesses adopting a ‘wait and see’ approach. “Businesses need to take charge, proactively adapting compliance to UK-GDPR and analysing how a lack of adequacy could impact them and their customers. Only by doing so will they avoid the financial and reputational damage caused by non-compliance. “Regardless of whether the government holds the blame for the current status quo or not, leaders must see this as an opportunity to reset their approach to data protection. This means putting the privacy, compliance and security of data at the heart of their business strategy and using technology to facilitate this.”


Marry IGA with ITSM to avoid the pitfalls of Identity 2.0

IAM solutions are too coarse-grained to handle such moves, in my experience. That forces admins to do IGA the hard way – taking care of onboarding, job changes, terminations, and so forth by hand. In addition to being a time- and labor-intensive hassle, manual IGA leads to numerous identity management errors. All too often, manual IGA grants access to new applications or information sources but doesn’t take away old ones, which exposes companies to security and compliance risks. Manual processes for managing patches, password resets, software updates, and more also increase risks. You don’t want an executive accessing highly confidential information from an app that doesn’t require two-factor authentication on a laptop that hasn’t been updated. But if IGA is managed from a spreadsheet, that’s exactly what happens. The employee lifecycle is only one of the IGA challenges that Identity 2.0 systems are not well-positioned to address. Take for example the expense and integration hassle of onboarding traditional IAM into manual IGA systems. The typical IGA system, like most enterprise systems, exists in a silo. Implementing manual IGA on systems such as HR, CRM, finance, and operations means writing numerous custom integrations.
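The "mover" problem is concrete set arithmetic: a role change must grant what the new role adds and revoke what only the old role had. A sketch of the hook an automated IGA integration runs (role-to-entitlement mappings are invented):

```python
# Invented role -> entitlement mappings.
ROLE_ENTITLEMENTS = {
    "finance_analyst": {"erp:read", "expenses:approve", "vpn"},
    "sales_manager":   {"crm:admin", "quotes:approve", "vpn"},
}

def on_role_change(user, old_role, new_role, grant, revoke):
    """Grant what the new role adds; revoke what only the old role had."""
    old = ROLE_ENTITLEMENTS[old_role]
    new = ROLE_ENTITLEMENTS[new_role]
    for ent in new - old:
        grant(user, ent)
    for ent in old - new:      # the step manual IGA most often forgets
        revoke(user, ent)

on_role_change(
    "dana", "finance_analyst", "sales_manager",
    grant=lambda u, e: print(f"GRANT  {e} -> {u}"),
    revoke=lambda u, e: print(f"REVOKE {e} -> {u}"),
)
```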


What Happens If a Cloud Provider Shuts You Out?

There are other reasons, such as sudden outages or the shutdown of a cloud provider, for organizations to create plans to salvage their code and get back online quickly, Valentine says. Heikki Nousiainen, CTO at Aiven, also says the threat of getting cut off by all three major cloud providers is very low for most other businesses -- yet companies may want to maintain the ability to move code around for disaster recovery needs. “They are rare, but we sometimes see these big outages touch Google, AWS, or Azure in one or more regions,” he says. Companies with very time-sensitive online business needs, for example, may want to maintain the ability to roll over to a backup elsewhere, Nousiainen says. He recommends exploring true multi-cloud options where companies can select providers freely without being locked in, and also going with open source technology because that lets the same set of services run in different clouds. Some of these options can come at a bit of a premium, though Nousiainen says the overall benefits may be worth it. “There are costs associated but typically when that investment goes into preparing infrastructure as code it also helps for many other problems such as disaster recovery.”


Dead System Admin's Credentials Used for Ransomware Attack

In a case study published Tuesday, the researchers say the system administrator had died three months previously, but the account remained active. The researchers note that there are numerous reasons why the account could have been left open, including the possibility that the system admin had helped with the initial setup of the targeted firm's services. "Closing down the account would have stopped those services working, so keeping the account going was, we'd imagine, a convenient way of letting the dead person's work live on," according to the report. The Sophos report also notes that these types of "ghost" accounts are an increasing problem for security teams, especially if other parts of the company forget that they remain active after an employee has left or died. "In this case, the active use of the account of a recently deceased colleague ought to have raised suspicions immediately - except that the account was deliberately and knowingly kept going, making its abuse look perfectly normal and therefore unexceptionable, rather than making it seem weirdly paranormal and therefore raising an alarm," according to Sophos.
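Given an authoritative leaver feed, ghost accounts are easy to sweep for. A sketch over an invented directory export (a real directory would expose the same fields via LDAP or an HR integration):

```python
from datetime import datetime, timedelta

# Invented directory export: account -> (owner status, last interactive login).
accounts = {
    "jsmith":  {"owner_active": True,  "last_login": datetime(2021, 1, 25)},
    "rlegacy": {"owner_active": False, "last_login": datetime(2021, 1, 24)},
    "svc_ci":  {"owner_active": False, "last_login": datetime(2020, 9, 1)},
}

now = datetime(2021, 1, 28)
for name, acct in accounts.items():
    if not acct["owner_active"]:
        recent = now - acct["last_login"] < timedelta(days=30)
        # An ex-employee's account that keeps logging in is the red flag in
        # the Sophos case: ongoing activity made its abuse look "normal".
        flag = "ACTIVE GHOST -- disable and investigate" if recent else "stale -- disable"
        print(f"{name}: {flag}")
```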



Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability." -- Med Jones

Daily Tech Digest - January 27, 2021

When Kubernetes is not the solution

Automation and orchestration are frequent reasons to leverage Kubernetes. Keep in mind that automation and orchestration often get confused, and for good reason. Automation can help make a business process more efficient by reducing or removing human involvement with software or hardware that performs specific tasks. For example, automation can launch a process to reorder raw materials automatically when other processes notice that supplies are below a specific level. In short, a single task is automated. Orchestration, in contrast, allows you to automate a workflow. Orchestration can keep track of sequence and activities, and can even invoke many single-task automations that are part of the workflow. Orchestration is a powerful Kubernetes tool that also allows you to invoke services such as database access across disparate systems. What's happening now is that many developers and architects choose Kubernetes to automate processes using the orchestration engine. That’s like hitting a thumbtack with a sledgehammer. You’ll end up spending way too many dollars on development and cloud resources to solve a simple, specific problem. Another fact that often gets overlooked is that Kubernetes is a complex system itself; it requires special expertise and at times can increase risk.
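The distinction is easy to see in code: automation wraps one task in a function; orchestration sequences several tasks and tracks workflow state. A toy sketch of the reorder example above (all names invented). If your whole workflow fits in a function like this, a full Kubernetes orchestration engine is the sledgehammer:

```python
# Automation: each function performs one task with no human involvement.
def check_stock(item):           # reads inventory
    return {"widgets": 12}.get(item, 0)

def reorder(item, qty):          # places a purchase order
    print(f"ordered {qty} x {item}")
    return "PO-1042"

def notify(po):                  # tells the warehouse team
    print(f"notified warehouse about {po}")

# Orchestration: sequencing the automations and tracking workflow state.
def replenishment_workflow(item, threshold=20, batch=100):
    state = {"item": item, "stock": check_stock(item)}
    if state["stock"] < threshold:
        state["po"] = reorder(item, batch)
        notify(state["po"])
    return state

print(replenishment_workflow("widgets"))
```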


Learning from Incidents

When we use language that wraps up complexity in a neat parcel like “human error,” or we make counterfactual assertions (“system X should be able to detect this scenario,”) we give participants in our investigation the opportunity to agree with something that might be true given what we know in hindsight, but which does not help us understand the behaviour of the people or systems during the incident. Everyone in the room can nod and acknowledge that the human did indeed make a mistake, or that system “X” really should have detected the issue. Have you understood anything new about how your system really works? Unlikely. Secondly, when we ignore power structures and the social dynamics of the organizations we work in, we risk learning less. Asking “why” questions can put people on the defensive, which might make them less likely to speak frankly about their own experience. This is especially important when the person being asked is relatively less powerful in the organisation. “Why did you deploy that version of the code?” can be seen as accusatory. If the person being asked is already worried about how their actions will be judged, it can close down the conversation. “During this incident you deployed version XYZ. 

4 Ways Blockchain Could Catapult Into the Mainstream

We are used to storing valuables at home such as money, jewelry or art. However, when the value of these goods exceed what we can insure, or what we feel safe in keeping at home, we usually turn to banks or special custodians as more convenient safeguards for storing our liquid assets. Cryptocurrency offers alternative storage options via personal wallets or easy on-ramps to exchanges or a new category of crypto custodians that possess their own secure vaults. Today, many self-custody wallets already exist, allowing users to experience the self-service option for assets storage. Those same wallets also enable the storage of another blockchain novelty: “digitally unique” artifacts also known as non-fungible tokens (or NFTs; think CryptoKitties). In the long term, banks and old-style physical storage services may not be the most popular or safest storage methods anymore. Being your own custodian is an attractive value proposition that comes with a degree of freedom and efficiency, as long as its relative ease of use and trust levels continue to improve. Many users will gradually de-bank their assets and move them into self-custody to take advantage of new services that are only available in the blockchain world. 


Security's Inevitable Shift to the Edge

Many security architects are initially attracted to the SASE model as it helps them apply security controls at the optimal location in their rapidly changing architecture. That optimal location is the edge of the Internet, which will be close to any infrastructure-as-a-service (IaaS) or co-location facility that the business uses today or in the future. The edge deployment model provides agility for hybrid multicloud organizations and is well suited to changes of IaaS vendor or to new locations arising from mergers and acquisitions. The flexibility of deploying security inspection at the edge means that, regardless of shifts in the location of compute, security inspection can be performed at a local edge node. This provides for optimized routing of traffic and avoids what Gartner describes as the unnecessary "tromboning of traffic to inspection engines entombed in enterprise data centers." Furthermore, since multicloud is the predominant architecture, deploying security at a homogeneous edge makes more sense than trying to engineer consistent controls using heterogeneous capabilities available at various cloud service providers (CSPs). Another driver for SASE is the migration of users outside of the traditional corporate offices.


Cisco bolsters edge networking family with expanded SD-WAN, security options

Among the four new models is a low-end box – the Cisco Catalyst 8500L – that's aimed at entry-level 1G/10G aggregation use cases, Cisco stated. The 1RU 8500L is powered by 12 x86 cores and features up to 64GB of memory to support secure connectivity for thousands of remote sites and millions of stateful NAT and firewall sessions, wrote Archana Khetan, senior director of product management for Enterprise Routing and SD-WAN Infrastructure at Cisco, in a blog about the new boxes. Businesses find that establishing aggregation sites at core locations or colocation facilities helps them own the first mile of branch and remote-worker connectivity to the internet and other software-defined cloud interconnects, Khetan stated. "The Catalyst 8500L provides ultra-fast IPsec crypto performance and advanced flow-based forwarding to keep up with the demands of today's high-speed, secure connectivity," Khetan stated. Targeting the branch, Cisco added the Catalyst 8200, which supports eight CPU cores for high-performance packet forwarding and 8GB of default RAM to run the latest security services, Khetan stated. The Catalyst 8200 Series supports up to 1Gbps of aggregate forwarding throughput, double the performance of its ISR 4300 predecessor, according to Khetan.


Ransomware: Should Governments Hack Cybercrime Cartels?

One proposal has been to ban all ransom payments. Whether such bans could be enforced is not clear. Also, organizations that did their best to safeguard themselves, but still saw their systems get crypto-locked, could go out of business or suffer devastating interruptions due to a ban. Short of a ban, Ciaran Martin, an Oxford University professor of practice in the management of public organizations who until last August served as the British government's cybersecurity chief, says governments should at least crack down on insurers being able to help victims funnel payoffs to attackers. "I see this as so avoidable. At the moment, companies have incentives to pay ransoms to make sure this all goes away," Martin tells The Guardian, expanding on suggestions he's previously made. "You have to look seriously [at] changing the law on insurance and banning these payments, or at the very least, having a major consultation with the industry." Responding to suggestions that ransom payments be banned, a spokesman for the Association of British Insurers tells Information Security Media Group: "Insurance is not an alternative to managing the cyber ransomware risk; it is part of a toolkit to combat this crime." The spokesman also notes that policyholders must have all "reasonable precautions" in place.


Experts predict hot enterprise architecture trends for 2021

There is increasing competition in enterprise architecture tools, with a lot of new players. There's going to be more investment in R&D. Hopefully, that means customers will get better tools for their EA initiatives. We'll see tools going in different directions and having different focuses. The newer generation of tools is typically data-driven. You don't draw your architecture. It is derived from the data you put into the tools. That opens up different uses for data analytics to create future-state scenarios, quantify the benefits to the business and use that to make strategic decisions. You can do organizational modeling. It's difficult to do that unless you have a data-driven approach, because otherwise you would have to create every single future-state scenario by hand. The entire delivery vehicle for the newer tools is cloud only, so you can deploy more rapidly. Companies that have moved to the cloud over the last couple of years realize that you can't be in one cloud anymore. You have to be in multiple clouds in order to ensure redundancy. That's another area where EA tools are focusing, creating native integration with these modern-day cloud environments and using enterprise architecture practices to manage and model them.
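
To make the "data-driven" idea concrete, here is a minimal sketch (the schema and names are illustrative assumptions, not any vendor's model): the architecture is captured as records, and current-state and future-state views are derived from that data rather than drawn by hand.

```python
# Minimal sketch of a data-driven EA model: the architecture is data,
# and views/scenarios are derived from it rather than drawn by hand.
# All field and application names here are hypothetical examples.

applications = [
    {"name": "billing",   "hosted_on": "on-prem", "planned_move": "aws"},
    {"name": "crm",       "hosted_on": "azure",   "planned_move": None},
    {"name": "warehouse", "hosted_on": "on-prem", "planned_move": "azure"},
]

def future_state(apps):
    """Derive the future-state hosting view from the current data."""
    return {a["name"]: a["planned_move"] or a["hosted_on"] for a in apps}

def clouds_in_use(state):
    """Quantify multicloud spread in a given scenario."""
    return sorted(set(state.values()) - {"on-prem"})

current = {a["name"]: a["hosted_on"] for a in applications}
future = future_state(applications)
print("current:", current)
print("future: ", future)
print("clouds in future state:", clouds_in_use(future))  # ['aws', 'azure']
```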

Streamlining cloud compliance through automation

The first is inherent in compliance with any cybersecurity and privacy requirement, and the cloud doesn’t make it go away (in fact, it arguably makes it worse) – and that’s the time it takes to audit. Companies preparing for audits must sink significant time and effort (hundreds of hours, every audit, across multiple requirements) into collecting a vast amount of technical data on information security controls and processes. Manually collecting data, taking screenshots, and organizing evidence takes time away from cloud and DevOps teams, time that could otherwise be spent building new products or services. ... Second, security capabilities meant for on-premises environments no longer apply when companies begin migrating to the cloud, making evidence gathering all the more complicated. Quite simply, the cloud creates a new paradigm, forcing companies to re-architect the security best practices they have spent years perfecting, in effect starting from scratch. Third, software development and change management in the cloud move at light speed compared to more traditional monolithic application updates, and it can be difficult for companies to keep up with the security and privacy implications of that ever-changing cloud environment.
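
As a rough illustration of what automating that evidence collection can look like, here is a minimal sketch for one control on AWS using boto3 (the control ID and evidence-file name are illustrative assumptions, not any framework's requirements): check every S3 bucket for default encryption and write timestamped evidence an auditor can review.

```python
# Minimal sketch: automated audit-evidence collection for one control
# (S3 default encryption), assuming AWS credentials and boto3 are configured.
# The control ID and evidence format are illustrative, not a real framework's.
import json
from datetime import datetime, timezone

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
evidence = {
    "control": "example-s3-default-encryption",
    "collected_at": datetime.now(timezone.utc).isoformat(),
    "results": [],
}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        status = "encrypted"
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "ServerSideEncryptionConfigurationNotFoundError":
            status = "unencrypted"          # finding to remediate
        else:
            status = f"error: {code}"       # e.g. access denied
    evidence["results"].append({"bucket": name, "status": status})

with open("evidence-s3-encryption.json", "w") as fh:
    json.dump(evidence, fh, indent=2)       # artifact for the audit trail
```

Run on a schedule, a script like this replaces hours of screenshot-taking with a repeatable, timestamped artifact.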


How to deliver an effective technology strategy in 2021

Technology strategies, like data strategies and digital transformations, can no longer be considered in isolation. Having the right technology platform is just one of a number of critical enablers of being competitive, agile and innovative in the 2020s. The growing trend in business transformation is a holistic approach which recognises that, to succeed, technology, data and digital transformations need to be tackled together, or at least in parallel. In the 2020s, businesses can be divided between those who are disrupting and those being disrupted. Disruptors enter categories with a transformative new product, service or customer experience, posing an existential threat to the existing players. Disruptors are digital-, data- and technology-first companies, leveraging these as assets in the battleground of customer experience. Any technology strategy should be intertwined with a data strategy. It should be focused on delivering the customer approach that serves the overall business plan. I appreciate that sounds a lot harder than focusing just on technology, but the alignment needs to be embraced rather than avoided if the desired outcomes are to be achieved. The world is littered with technology that’s easy to buy, more challenging to implement, and often only partially used or not used at all.


Cybersecurity, Modernization Top Priorities for Federal CIOs

One significant focus not covered by the first 100-day plan but indicated in the proposed stimulus package is a response to something more recent -- the SolarWinds hack, which has impacted both government and commercial IT organizations. In response, the new administration is putting a new focus on cybersecurity, adding provisions that cover this area to the COVID-19 stimulus package. While it still needs to go through Congress, the administration's American Rescue Plan calls for a total of more than $10 billion for cybersecurity and IT modernization efforts, plus some other IT-related areas. "In addition to the COVID-19 crisis, we also face a crisis when it comes to the nation's cybersecurity," a brief of the plan says. "The recent cybersecurity breaches of federal government data systems underscore the importance and urgency of strengthening US cybersecurity capabilities. President-elect Biden is calling on Congress to launch the most ambitious effort ever to modernize and secure federal IT and networks." Even if the funding doesn't remain in the stimulus package that Congress ultimately passes, its inclusion highlights just what a priority cybersecurity is for the administration going forward.



Quote for the day:

"If we were a bit more tolerant of each other's weaknesses we'd be less alone." -- Juliette Binoche

Daily Tech Digest - January 25, 2021

DDoS Attackers Revive Old Campaigns to Extort Ransom

Radware's researchers say the tactics recently observed with the attacks launched by this particular group indicate a fundamental change in how it operates. Previously, the operators would target a company or industry for a few weeks and then move on. The 2020-2021 global ransom DDoS campaign represents a strategic shift from those tactics. "DDoS extortion has now become an integral part of the threat landscape for organizations across nearly every industry since the middle of 2020," the report states. The other major change spotted is that this threat group is no longer shy about returning to targets that initially ignored its attack or threat, with Radware saying companies that were targeted last year could expect another letter and attack in the coming months. "We asked for 10 bitcoin to be paid at (bitcoin address) to avoid getting your whole network DDoSed. It's a long time overdue and we did not receive payment. Why? What is wrong? Do you think you can mitigate our attacks? Do you think that it was a prank or that we will just give up? In any case, you are wrong," the second letter says, according to Radware. "The perseverance, size and duration of the attack makes us believe that this group has either been successful in receiving payments or they have extensive financial resources to continue their attacks," the report states.


Five Reasons You Shouldn't Reproduce Issues in Remote Environments

When attempting to reproduce an issue across multiple environments, one area where teams must have solid processes is test data management. Test data can be critical to reproducing bugs: if you don’t have the right test data in your environment, the bug may not be reproducible. Due to the sheer size of production data sets, teams must often work with subsets of that data across test environments. The holy grail of test data management is to allow teams to easily and quickly subset production data based on the data needed to reproduce an issue. In practice, things don’t always work out so easily. It’s hard to know what attributes of your test data may be influencing a specific bug. In addition, data security when dealing with personally identifiable information (PII) can be a major challenge when subsets of data are used across environments. Teams need to ensure that they comply with corporate data privacy standards by masking the data or generating new, relevant data sets. It often takes extensive logging and hands-on investigation to uncover how data discrepancies cause those hard-to-find bugs. If you cannot easily manage and set up test data on demand, teams will suffer the consequences when trying to reproduce bugs in remote environments.
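
A minimal sketch of that subset-and-mask idea (the schema, field names, and masking rule are illustrative assumptions): pull only the production rows implicated in a bug, then deterministically mask PII before the subset reaches a test environment.

```python
# Minimal sketch of test-data subsetting plus PII masking.
# The schema and the choice of masked fields are illustrative assumptions.
import hashlib

customers = [
    {"id": 1, "name": "Ada Lovelace", "email": "ada@example.com"},
    {"id": 2, "name": "Alan Turing",  "email": "alan@example.com"},
]
orders = [
    {"id": 101, "customer_id": 1, "status": "stuck_in_processing"},  # the bug
    {"id": 102, "customer_id": 2, "status": "shipped"},
]

def mask(value: str) -> str:
    """Deterministic mask: the same input always maps to the same token,
    so joins still line up across tables without exposing real PII."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def subset_for_bug(status: str):
    """Keep only the orders exhibiting the bug, plus their customers, masked."""
    bad_orders = [o for o in orders if o["status"] == status]
    wanted_ids = {o["customer_id"] for o in bad_orders}
    masked_customers = [
        {"id": c["id"], "name": mask(c["name"]), "email": mask(c["email"])}
        for c in customers if c["id"] in wanted_ids
    ]
    return bad_orders, masked_customers

print(subset_for_bug("stuck_in_processing"))
```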


AI ethics: Learn the basics in this free online course

If you are interested, an excellent place to start might be the free online course The Ethics of AI, offered by the University of Helsinki in partnership with "public sector authorities" in Finland, the Netherlands, and the UK. Anna-Mari Rusanen, a university lecturer in cognitive science at the University of Helsinki and course coordinator, explains why the group developed the course: "In recent years, algorithms have profoundly impacted societies, businesses, and us as individuals. This raises ethical and legal concerns. Although there is a consensus on the importance of ethical evaluation, it is often the case that people do not know what the ethical aspects are, or what questions to ask." Rusanen continues, "These questions include how our data is used, who is responsible for decisions made by computers, and whether, say, facial recognition systems are used in a way that acknowledges human rights. In a broader sense, it's also about how we wish to utilize advancing technical solutions." The course, according to Rusanen, provides basic concepts and cognitive tools for people interested in learning more about the societal and ethical aspects of AI. "Given the interdisciplinary background of the team, we were able to handle many of the topics in a multidisciplinary way," explains Rusanen.


Zero trust: A solution to many cybersecurity problems

CISOs of organizations that have been hit by the attackers are now mulling over how to make sure that they’ve eradicated the attackers’ presence from their networks, and those with very little risk tolerance may decide to “burn down” their network and rebuild it. Whichever decision they end up making, Touhill believes that implementing a zero trust security model across their enterprise is essential to better protect their data, their reputation, and their mission against all types of attackers. And, though a good start, this should be followed by the implementation of the best modern security technologies, such as software-defined perimeter (SDP), single packet authorization (SPA), microsegmentation, DMARC (for email), identity and access management (IDAM), and others. SDP, for example, is an effective, efficient, and secure technology for secure remote access, which became one of the top challenges organizations faced due to the COVID-19 pandemic and the massive pivot from the traditional office environment to a work-from-anywhere environment. Virtual private network (VPN) technology, which was the initial go-to tech for secure remote access for many organizations, is over twenty years old and, from a security standpoint, very brittle, he says.
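
To make the single packet authorization idea concrete, here is a minimal sketch using only the Python standard library (the payload format and key handling are illustrative assumptions, not how fwknop or any real SDP product encodes its packets): the client sends one HMAC-signed datagram, and the server stays dark unless the signature verifies and the timestamp is fresh.

```python
# Minimal sketch of single packet authorization (SPA): one HMAC-signed
# payload; the server drops everything silently unless it verifies.
# Payload format and key handling are illustrative assumptions only.
# Real SPA also encrypts the payload and dedupes packets to block replay.
import hashlib
import hmac
import json
import time

SHARED_KEY = b"pre-shared-secret"   # in practice: per-client keys, rotated
MAX_AGE_SECONDS = 30                # reject stale packets

def build_spa_packet(user: str) -> bytes:
    body = json.dumps({"user": user, "ts": time.time()}).encode()
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body + b"|" + sig.encode()

def verify_spa_packet(packet: bytes) -> bool:
    try:
        body, sig = packet.rsplit(b"|", 1)
    except ValueError:
        return False                                  # malformed: drop
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return False                                  # bad signature: drop
    claims = json.loads(body)
    return time.time() - claims["ts"] <= MAX_AGE_SECONDS

pkt = build_spa_packet("alice")
print(verify_spa_packet(pkt))              # True -> firewall may open a port
print(verify_spa_packet(pkt[:-1] + b"x"))  # tampered -> False, no response
```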


Comparing Different AI Approaches to Email Security

Supervised machine learning involves harnessing an extremely large data set of thousands or millions of emails. Once these emails have been collected and labeled, a model is trained to look for patterns common to malicious emails. The system then updates its models, rule sets, and blacklists based on that data. This method certainly represents an improvement over traditional rules and signatures, but it does not escape the fact that it is still reactive and unable to stop new attack infrastructure or new types of email attacks. It is simply automating that flawed, traditional approach – only, instead of a human updating the rules and signatures, a machine updates them. Relying on this approach alone has one basic but critical flaw: it does not enable you to stop new types of attacks the system has never seen before. It accepts that there has to be a "patient zero" – or first victim – in order to succeed. The industry is beginning to acknowledge the challenges with this approach, and huge amounts of resources – both automated systems and security researchers – are being thrown into minimizing its limitations. This includes leveraging a technique called "data augmentation," which involves taking a malicious email that slipped through and generating many "training samples" using open source text augmentation libraries to create "similar" emails.
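
As a hedged illustration of that augmentation step (real pipelines use open source text augmentation libraries; this hand-rolled version, with a toy synonym table, just shows the shape of the idea): take one phishing email that slipped through and generate several perturbed variants to use as extra labeled-malicious training samples.

```python
# Minimal sketch of text augmentation for email-security training data:
# generate "similar" variants of one caught phishing email via synonym
# replacement and word dropout. The synonym table is a toy example.
import random

random.seed(7)  # reproducible example

SYNONYMS = {
    "urgent": ["immediate", "critical"],
    "verify": ["confirm", "validate"],
    "account": ["profile", "login"],
}

def augment(text: str, n_samples: int = 5):
    """Produce n perturbed copies of the input text."""
    words = text.lower().replace(":", " ").split()
    samples = []
    for _ in range(n_samples):
        out = []
        for w in words:
            if w in SYNONYMS and random.random() < 0.5:
                out.append(random.choice(SYNONYMS[w]))  # synonym replacement
            elif random.random() < 0.1:
                continue                                 # word dropout
            else:
                out.append(w)
        samples.append(" ".join(out))
    return samples

caught_phish = "Urgent: verify your account now to avoid suspension"
for sample in augment(caught_phish):
    print(sample)  # each line becomes an extra labeled-malicious sample
```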


Why Is Agile Methods Literacy Key To AI Competency Enablement?

First, quality AI is a highly iterative experimentation, design, build and review process. Organizations aspiring to build strong AI and data science competency centers will flounder if their core cultures are not building agile skills into all operating functions, from top to bottom. Given the incredible speed and uncertainty of everything becoming more digital and smarter, the need for all talent to continually adapt, reflect, and make decisions based on new information is a business imperative. Leaders do not have the luxury of procrastinating before acting on new insights and making decisions with confidence. Sometimes, cultures can build a capacity for inaction rather than action-oriented behavior. Agile leadership demands rapid precision and involves diverse stakeholders, which in turn yields more positive change dynamics (momentum); more importantly, innovation capacity grows as a result of this energy. In a recent Harvard article, the authors pointed out that, “If people lack the right mindset to change and the current organizational practices are flawed, digital transformation will simply magnify those flaws.” Truly agile organizations are able to capitalize on new information and make the next move because they have what we call the capacity to act.


10 ways to prep for (and ace) a security job interview

Hiring managers typically look for strong technical skills and specific cybersecurity experience in the candidates they want to interview, particularly for candidates filling entry- and mid-level positions within enterprise security. But managers use interviews to determine how well candidates can apply those skills and, more specifically, whether candidates can apply those skills to support the broader objectives of the organization, says Sounil Yu, CISO-in-resident at YL Ventures. As such, Yu says he and others look for “T-shaped individuals”—those with deep expertise in one area but with general knowledge across the broader areas of business. The candidates who get job offers are those who have, and demonstrate, both. “Security is a multidisciplinary problem, so that depth is an important asset,” Yu adds. Candidates love to say they’re passionate about security, but many can’t figure out how to showcase it. Those who can, however, stand out. Yu once interviewed a candidate via video and could see a server rack in the background of this person’s home office. “He clearly liked tinkering outside of work. You could see that he had tech skills and a passion for them and a drive to learn about new technologies,” Yu says. 


The changing role of IT & security – navigating remote work cybersecurity threats

The move to remote working and the complication of multiple devices and locations is also raising important questions about software licensing. Are you licensed for the apps that people are using at home, or are you licensed on their computer in the office and on their computer at home? Several businesses are now having to buy thousands of additional software licenses so that employees can work on more than one computer, at a time when cost optimisation is extremely important. One related threat to businesses is running afoul of regulatory data privacy protections such as GDPR and CCPA, among others. Given the current state of things, it is unlikely that a regulator would currently be hunting for companies that might be improperly managing employee and customer data. Regulators appear to be largely more lenient at this stage while companies are busy just trying to survive. Whilst it is reasonable to expect that this will continue for a time, there will come a point when we see a return to enforcement and, in the meantime, there is no guarantee that regulators will not review issues that come up as a result of a data breach or loss. It’s always important to reinforce the best security practices to your workforce, but it is especially important when your employees are out of their normal routines.


Weighing Doubts of Transformation in the Face of the Future

You don’t have to [change], but you will be left behind. Seventy-four percent of CEOs believe that their talent force and organization need to be a digitally transformed organization, yet they feel like only 17% of their talent is capable and ready to do that. That gap is glaring. That’s coming from the tops of organizations and businesses. The first mover advantage has kind of passed already. Now we’re getting into the phase of cloud migration and the concept of everything-as-a-service. Digital transformation is easier to attain. You don’t have to be the first mover or early adopter. The companies that help you live, work, and play inside your home were pretty resilient during the COVID-19 pandemic. Tech, media, and fitness companies like NordicTrack and Peloton that helped you stay inside your house, they were the ones that needed to transform digitally immediately to deal with the significant increase in demand along with significant supply chain challenges. Now we are seeing other industries that saw a bit of a pause during COVID -- consumer, travel, entertainment, energy -- those businesses are seeing or expecting this uptick in the summer travel period, the pent-up demand of Americans. Interest rates are very low, and they haven’t been able to spend [as much] money for the last 12 to 18 months by the time the summer comes around.


Good News: Cryptocurrency-Enabled Crime Took a Dive in 2020

While the total cryptocurrency funds received by illicit entities declined in 2020, Chainalysis reports, criminals continue to love cryptocurrency - with bitcoin still dominating - because using pseudonymizing digital currencies gives them a way to easily receive funds from victims. Cryptocurrency also supports darknet market transactions, with many markets offering escrow services to help protect buyers and sellers against fraud. Using cryptocurrency, criminals can access a variety of products and services, such as copies of malware or hacking tools, complete sets of credit card details known as "fullz," and tumbling or mixing services - third-party services or technologies that attempt to launder bitcoins by routing them between numerous addresses. Criminals have also been using a legitimate concept called "coinjoin," which is sometimes built into cryptocurrency wallets as a feature. It allows users to mix virtual coins together while paying for separate transactions, which can complicate attempts to trace any individual transaction. Intelligence and law enforcement agencies have some closely held ability to correlate the cashing out of cryptocurrency with deposits made into individuals' bank accounts.
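
A conceptual sketch of why coinjoin complicates tracing (this models the idea only, not real Bitcoin transaction serialization or any wallet's implementation): several users' equal-sized inputs are merged into one transaction with shuffled outputs, so an observer cannot map inputs to outputs one-to-one.

```python
# Conceptual sketch of a coinjoin: merge several users' payments into one
# transaction with uniform, shuffled outputs, making input->output linkage
# ambiguous to a chain observer. Not real Bitcoin serialization.
import random

random.seed(42)  # reproducible example

# (input_address, amount) pairs from three unrelated users
inputs = [("addr_A", 0.1), ("addr_B", 0.1), ("addr_C", 0.1)]
# each user's intended fresh destination address
destinations = ["dest_1", "dest_2", "dest_3"]

def coinjoin(inputs, destinations):
    """Build one joint transaction with uniform, shuffled outputs."""
    assert len({amt for _, amt in inputs}) == 1, "equal amounts hide linkage"
    outputs = [(dest, inputs[0][1]) for dest in destinations]
    random.shuffle(outputs)  # output order reveals nothing about ownership
    return {"inputs": inputs, "outputs": outputs}

tx = coinjoin(inputs, destinations)
print(tx)
# An observer sees three equal inputs and three equal outputs in a single
# transaction; every input-to-output pairing is equally plausible.
```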



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley