Daily Tech Digest - February 01, 2021

Welcome to the client-serverless revolution

As this trend intensifies, a new paradigm of connected internet applications has come to the forefront. This approach is known as client-serverless computing. It delivers consistently dynamic, interactive application experiences from any smartphone or edge device, no matter where a user happens to be, or where the resources they’re accessing are being served from. The widespread adoption of rich-client devices and the global availability of distributed cloud services have fueled the client-serverless computing trend even more, but it also demands more from developers. No longer can developers assume that their program code will primarily access databases, app servers, and web servers located within a single data center or cloud region. Instead, developers must build server-side business logic and markup, as well as the client-side JavaScript that will render the user interface on myriad client devices. They must code applications that are optimized for high-quality, browser-side interactivity over industry-standard interfaces such as REST (for remote APIs) and JSON (for data formats). Client-serverless has roots in the old-guard, three-tier application architectures that sprang up around PCs and local area networks, connecting a client-side GUI to a back-end SQL database.
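As a minimal sketch of the server side of this pattern, consider a function-style handler that speaks JSON over REST. The handler name and event shape below are hypothetical stand-ins, not any particular vendor's API.

```python
import json

# Hypothetical serverless entry point: the platform invokes this once per
# HTTP request; the client (typically browser JavaScript) talks to it over
# REST and exchanges JSON, with no assumption about where it is hosted.
def handler(event, context=None):
    user_id = event.get("pathParameters", {}).get("userId")
    if user_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "userId is required"})}
    # Business logic would fetch from a (possibly distant) data service here.
    profile = {"userId": user_id, "plan": "standard"}
    return {"statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(profile)}

if __name__ == "__main__":
    print(handler({"pathParameters": {"userId": "42"}}))
```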


Strengthening Zero-Trust Architecture

First, it's helpful to consider zero trust in terms of the need for controlled access management that does not negatively affect the business. Specifically, organizations must establish a zero-trust environment that limits access to individuals with the proper authority but doesn't interfere with daily operations. One way to accomplish this is through a data-trust lens. Rather than granting blanket access to validated users, organizations should hide specific files and data from those who don't have the authorization to access them, strengthening data protection beyond user-level permissions without impacting authorized users. By hiding objects like files, folders, or mapped network and cloud shares, attackers cannot find or access the data they seek. This function can serve as a powerful defense against data theft and ransomware attacks. Application trust likewise takes security beyond user privileges. Merely focusing on whether a query is authorized isn't enough — it's also vital to consider the application invoking that query. Doing so can prevent unauthorized access from applications such as the Windows command line or PowerShell, which regular users wouldn't typically use to access data. Application trust can also help identify and deflect attackers probing for open ports and services to compromise.
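To make the application-trust idea concrete, here is a toy policy check with hypothetical resource and application names; a real implementation would hook into the operating system to identify the invoking process rather than trust a string.

```python
# Illustrative only: a toy policy that grants access only when both the
# user and the invoking application are trusted for the resource, in the
# spirit of the "application trust" idea above. All names are invented.
TRUSTED_APPS = {"finance.db": {"erp_client.exe"},
                "hr_records": {"hr_portal.exe"}}
AUTHORIZED_USERS = {"finance.db": {"alice"}, "hr_records": {"bob"}}

def allow(user: str, app: str, resource: str) -> bool:
    user_ok = user in AUTHORIZED_USERS.get(resource, set())
    app_ok = app in TRUSTED_APPS.get(resource, set())
    return user_ok and app_ok

# Even an authorized user querying via PowerShell is denied:
print(allow("alice", "erp_client.exe", "finance.db"))  # True
print(allow("alice", "powershell.exe", "finance.db"))  # False
```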


How can tech leaders take people with them on their digital transformation journey?

Leaders need to make it personal for their employees and make it clear that introducing a new digital tool will make their lives easier and their work more productive. Leaders can look to do this by winning hearts and minds through demonstrations and simple, clear communication. If, for example, a business is introducing a new collaborative tool, it needs to make clear how that will benefit employees. Will it reduce email traffic? Make instant communication more effective? Or free up more time in their day to focus on other priorities? Demonstrating these benefits will help to put people in the right mindset from the start. It’s also important to ask for instant feedback on transformational change programmes. Ensuring people are involved from the start will promote engagement throughout the process and help leaders to understand how their employees feel about the change and its impact on their teams. Identify champions and advocates: digital change champions are nothing new, but they are critical to supporting the rollout of digital transformation at the frontline of a business. These people can answer frequently asked questions, provide an additional avenue of communication to leaders and encourage employees to make best use of the new tools being made available to them.


AI No Silver Bullet for Cloud Security, but Here’s How It Can Help

One of the most promising – and certainly most developed – uses of AI in cybersecurity is to use AI systems to trawl through historical data in order to identify attack patterns. Some AI algorithms are very effective at this task, and can inform otherwise oblivious cybersecurity teams that they have, in fact, been hacked many times. The primary value of this kind of system lies in managing employee access to systems and files. AI systems are extremely good at tracking what individual users are doing and at comparing this with what they typically do. This allows administrators (or automated security systems, explored below) to easily identify unusual activity and block users’ access to files or systems before any real damage is done. This kind of functionality is now widespread in many industries. Some cloud providers even ship it with their basic cloud storage systems. In many cases, in fact, an organization is not even aware that an AI is collecting data on the way it uses its cloud service in order to scan for unusual activity. This type of tool, however, also represents the limit of what AI can do, in terms of cloud security, at the moment. Most organizations lack the tools to use AI systems in a more complex way than this.
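The baseline-versus-current comparison at the heart of such systems can be sketched in a few lines. The threshold and activity numbers below are invented for illustration; this is not any vendor's algorithm.

```python
import statistics

# Toy behavioral baseline check: compare a user's activity today against
# their own history and flag statistical outliers for review or blocking.
def is_unusual(history, today, z_threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero spread
    return abs(today - mean) / stdev > z_threshold

files_accessed_per_day = [12, 9, 14, 11, 10, 13, 12]
print(is_unusual(files_accessed_per_day, 11))   # False: a normal day
print(is_unusual(files_accessed_per_day, 480))  # True: flag and block
```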


How do I select a PAM solution for my business?

Before choosing a PAM solution for their business, the first question a CISO should ask themselves is what it is that they aim to protect. Adopting PAM is as much about mindset and approach as it is about technology. Thousands of PAM programme engagements with the world’s largest organizations have cemented our view that the best way to protect the business is first to identify critical data and assets, then assess the paths that an attacker might take to compromise them. This sounds obvious, but it is not yet the common practice that it should be. Privileged identities, credentials, secrets and accounts are found throughout IT infrastructure, whether on-premises, multi-cloud or a mix thereof. The initial focus should be on the ones that allow access to your critical data and assets. Once these are determined, there are a number of essential features that apply: ease of implementation, ease of use, and ease of integration, the last of which is essential, so look for integrations with your existing vendor stack; cloud readiness, because you are likely to be moving applications into the cloud and their privileged access needs to be secured; session management and recording; credential management for humans, applications, servers and machines; audit and reporting features; and privileged threat alerting.


Reported Data Breaches Rise 5% in Australia

The Office of the Australian Information Commissioner received 539 notifications between July and December, up from 512 in the first half of the year, according to its new report. Healthcare providers reported 133 breaches, followed by finance at 80; education, 40; legal, accounting and management services at 33; and the federal government at 33. This marked the first time the Australian government entered the top five list of sectors reporting the most breaches, displacing the insurance industry. The federal government’s breach tally does not include intelligence agencies or state and local government agencies, public hospitals and public schools. Under Australia’s notifiable data breaches law, organizations covered by the Privacy Act 1988 are required to report within 30 days breaches that are likely to result in “serious harm.” Fines for noncompliance can range up to 2.1 million Australian dollars ($1.6 million). The breach notification law went into effect in 2018 (see: Australia Enacts Mandatory Breach Notification Law). Although breach notifications increased by 5%, the OAIC characterized that as a “modest” increase given the rising cybersecurity risks introduced by the rapid shift in early 2020 to working from home due to the COVID-19 pandemic.


‘Weird new things are happening in software,’ says Stanford AI professor Chris Re

To handle the subtleties of which he spoke, Software 2.0, Re suggested, is laying out a path to turn AI into an engineering discipline, as he put it, one where there is a new systems approach, different from how software systems were built before, and an attention to new "failure modes" of AI, different from how software traditionally fails. It is a discipline, ultimately, he said, where engineers spend their time on more valuable things than tweaking hyper-parameters. Re's practical example was a system he built while he was at Apple, called Overton. Overton allows one to specify the forms of data records and the tasks to be performed on them, such as search, at a high level, in a declarative fashion. Overton, as Re described it, is a kind of end-to-end workflow for deep learning. It preps the data, picks a neural-net model, tweaks its parameters, and deploys the program. Engineers spend their time "monitoring the quality and improving supervision," said Re, the emphasis being on "human understanding" rather than data structures. Overton, and another system, Ludwig, developed by Uber machine learning scientist Piero Molino, are examples of what can be called zero-code deep learning. "The key is what's not required here," Re said.
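To make the declarative idea concrete, here is a hypothetical zero-code-style spec plus a stand-in training harness. It is loosely inspired by published descriptions of Overton and Ludwig but is not either system's actual API; every name below is invented.

```python
# Hypothetical declarative spec: the engineer declares the data schema
# and the task, and the framework chooses and tunes the model.
spec = {
    "inputs": [{"name": "query_text", "type": "text"},
               {"name": "record_title", "type": "text"}],
    "outputs": [{"name": "relevant", "type": "binary"}],
    "task": "search_relevance",
}

def compile_and_train(spec, data):
    """Stand-in for the framework: select a model for the declared
    schema, tune hyper-parameters, and return a deployable artifact."""
    print(f"Selecting model for task {spec['task']!r} "
          f"with {len(spec['inputs'])} input fields...")
    return lambda record: {"relevant": True}  # placeholder predictor

model = compile_and_train(spec, data=[])
print(model({"query_text": "wireless headphones", "record_title": "..."}))
```

The point, as Re says, is what is *not* required: no layer definitions, no optimizer settings, no hyper-parameter sweeps in the engineer's hands.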


Hunting and anti-hunting groups locked in tit-for-tat row over data gathering

The data collection practices of the Hunting Office (HO), a central organisation delegated to run the administrative, advisory and supervisory functions of the UK’s hunting associations, and the Countryside Alliance (CA), a campaign organisation with over 100,000 members that promotes rural issues, have been questioned by activists running a website called Hunting Leaks. The website owners said that a monthly round-up of anti-hunting activity – which appears to have been shared via email with hunts across the UK – was passed on to Hunting Leaks by an undisclosed animal rights group. The leaked document, a report on saboteur activity between 14 November and 12 December 2020, lists the names of anti-hunting groups, the names of 30 activists (some of which are referred to multiple times) and information about their vehicles, including registration numbers. It also includes information on the number of anti-hunting activists in attendance, details about their movements and activity on a given hunt day, as well as guidance for how hunt members should approach collecting information and video footage. 


6 ways to bring your spiraling cloud costs under control

The best way to avoid overspending on cloud resources is to know what you need ahead of time. “Scalable cloud services, in theory, have made overprovisioning unnecessary, but old behaviors used in traditional data centers lead to [cloud] resources that are often underutilized or completely idle, which result in unnecessary spend,” wrote Gartner analysts in a December 2020 research note. This may not be music to the ears of anyone who has already made sizable commitments in the scramble to react to the challenges of the pandemic, but it does highlight the importance of right-sizing your cloud environment where possible. “Start with knowing what you spend—not just the invoice you get—but what are you spending on, where are you spending the most, and where are you seeing growth,” said Eugene Khvostov, vice president of product engineering at cost-management software specialist Apptio. For larger organizations, a proven approach is to establish a dedicated cloud center of excellence, tasked with monitoring and governing cloud usage and establishing best practices. For smaller organizations, this responsibility falls on senior members of the IT team, who will be tasked with establishing budgetary guardrails, often linked to longer-term ROI requirements.
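"Know what you spend" starts with a roll-up of raw billing line items. The toy below groups hypothetical line items by service to surface the biggest spenders; real billing exports differ by provider, so the field names are placeholders.

```python
from collections import defaultdict

# Aggregate raw billing rows by service to see where the money goes.
line_items = [
    {"service": "compute", "cost": 812.40},
    {"service": "storage", "cost": 143.10},
    {"service": "compute", "cost": 970.25},
    {"service": "egress",  "cost": 402.80},
]

totals = defaultdict(float)
for item in line_items:
    totals[item["service"]] += item["cost"]

# Largest spend first: the starting point for right-sizing decisions.
for service, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{service:10s} ${cost:,.2f}")
```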


Looking beyond Robotic Process Automation

There are a whole host of reasons why a process might not be suitable for automation, but you should consider things such as the time it will take to automate and how many steps in the process require human intervention. Generally speaking, the more logical and easier to define the process is, the faster and easier it is to automate. With a holistic view of processes in your organisation, you will be able to pinpoint which processes can and should be automated, as well as those where people are the key drivers. This will not only be crucial in achieving greater efficiencies, but in demonstrating the benefits to employees and an understanding of where they fit into this new way of working. Consider where upskilling or knowledge sharing might be needed to ensure employees are equipped to support automation. It’s all well and good having technology in place, but it won’t run effectively without the right people and buy-in alongside it. The relationship between people and technology is going to become even more important as the capabilities of RPA and other machine learning-based tools advance over the next few years. Just because you can’t fully automate a process doesn’t mean greater efficiencies can’t be achieved.



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - January 31, 2021

How retailers can manage data loss threats during remote work

While an e-commerce store often relies on many software tools to help make day-to-day operations a little easier, it's likely that the number of apps being used has gone up with the increase in remote work. However, separate software tools don't always play nice together, and the level of access and control they have over your data might surprise you. Some even have the ability to delete your data without warning. At least once a year, e-commerce merchants should audit all the applications connected to their online store. Terms and conditions can change, so it's best you understand any changes in the last 365 days. List all the pros and cons of each integration and decide if any tradeoffs are worth it. SaaS doesn't save everything: software-as-a-service (SaaS) tools will always ensure the nuts and bolts of the platform work. However, protecting all the data stored inside a SaaS or cloud solution like BigCommerce or Shopify rests on the shoulders of users. If you don't fully back up all the content and information in your store, there's absolutely no guarantee it will be there the next time you log in. This model isn't limited to just e-commerce platforms. Accounting software like QuickBooks, productivity tools like Trello and even code repositories like GitHub all follow the same model.


Don't make these cyber resiliency mistakes

Manea begins by sharing the well-worn axiom that defenders must protect every possible opening, while attackers only need one way in. Taken to heart, that truism alone should be enough to replace a prevention attitude with one based on resilience. Manea then suggests caution. "Make sure you understand your organizational constraints—be they technological, budgetary, or even political—and work to minimize risk with the resources that you're given. Think of it as a game of economic optimization." ... Put simply, a digital threat-risk assessment is required. Manea suggests that a team including representatives from the IT department, business units, and upper management work together to create a security-threat model of the organization—keeping in mind: What would an attacker want to achieve? What is the easiest way for an attacker to achieve it? And what are the risks, their severity, and their likelihood? An accurate threat model allows IT-department personnel to implement security measures where they are most needed and not waste resources. "Once you've identified your crown jewels and the path of least resistance, focus on adding obstacles to that path," he said.
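As a minimal sketch of turning those three questions into numbers, the toy triage below scores each attack path as likelihood times severity; all paths and values are invented for illustration, not drawn from Manea's methodology.

```python
# Minimal threat-model triage: rank risks by likelihood x severity so
# defenses land on the attacker's cheapest path first.
risks = [
    {"path": "phishing -> VPN creds",  "likelihood": 0.60, "severity": 9},
    {"path": "unpatched web server",   "likelihood": 0.30, "severity": 8},
    {"path": "physical badge cloning", "likelihood": 0.05, "severity": 7},
]

for r in sorted(risks, key=lambda r: r["likelihood"] * r["severity"],
                reverse=True):
    print(f"{r['likelihood'] * r['severity']:5.2f}  {r['path']}")
```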


Researchers have developed a deep-learning algorithm that can de-noise images

In conventional deep-learning-based image processing techniques, the number of layers and the connections between them decide how many pixels in the input image contribute to the value of a single pixel in the output image. This value is immutable after the deep-learning algorithm has been trained and is ready to de-noise new images. However, Ji says fixing the number of input pixels, technically called the receptive field, limits the performance of the algorithm. “Imagine a piece of specimen having a repeating motif, like a honeycomb pattern. Most deep-learning algorithms only use local information to fill in the gaps in the image created by the noise,” Ji says. “But this is inefficient because the algorithm is, in essence, blind to the repeating pattern within the image since the receptive field is fixed. Instead, deep-learning algorithms need to have adaptive receptive fields that can capture the information in the overall image structure.” To overcome this hurdle, Ji and his students developed another deep-learning algorithm that can dynamically change the size of the receptive field. In other words, unlike earlier algorithms that can only aggregate information from a small number of pixels, their new algorithm, called global voxel transformer networks (GVTNets), can pool information from a larger area of the image if required.
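The contrast between a fixed local receptive field and a global, content-adaptive one can be sketched in one dimension. The toy below is our own illustration of the idea (not the authors' GVTNet code): on a repeating "honeycomb" signal, a 3-pixel local filter blurs the motif, while similarity-weighted global aggregation exploits the repetition elsewhere in the image.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.tile([0.0, 1.0], 32)             # repeating motif
noisy = signal + rng.normal(0, 0.3, signal.size)

# Fixed receptive field: each output pixel sees only 3 neighbours.
local = np.convolve(noisy, np.ones(3) / 3, mode="same")

# Global aggregation: each pixel attends to every similar pixel in the
# whole signal, so the repeating pattern helps cancel the noise.
weights = np.exp(-(noisy[:, None] - noisy[None, :]) ** 2 / 0.1)
globl = (weights * noisy[None, :]).sum(1) / weights.sum(1)

print("local  error:", np.abs(local - signal).mean())
print("global error:", np.abs(globl - signal).mean())
```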


Manufacturers Take the Initiative in Home IoT Security

Although ensuring basic connectivity between endpoint devices and the many virtual assistants they connect to would seem to be a basic necessity, many consumers have encountered issues getting their devices to work together effectively. While interoperability and security standards exist, there are none in place that provide consumers the assurance their smart home device will seamlessly and securely connect. To respond to consumer concerns, “Project Connected Home over IP” was launched in December 2019. Initiated by Amazon, Apple, Google and the Zigbee Alliance, this working group focuses on developing and promoting a standard for interoperability that emphasizes security. The project aims to enable communication across mobile apps, smart home devices and cloud services, defining a specific set of IP-based networking technologies for device certification. The goal is not only to improve compatibility but to ensure that all data is collected and managed safely. Dozens of smart home manufacturers, chip manufacturers and security experts are participating in the project. Since security is one of the key pillars of the group’s objectives, DigiCert was invited to provide security recommendations to help ensure devices are properly authenticated and communication is handled confidentially.


Has 5G made telecommunications sustainable again?

The state of the personal communications market as we enter 2021 bears undeniable similarity to that of the PC market (personal computer, if you've forgotten) in the 1980s. When the era of graphical computing began in earnest, the major players at that time (e.g., Microsoft, Apple, IBM, Commodore) tried to leverage the clout they had built up to that point among consumers, to help them make the transition away from 8-bit command lines and into graphical environments. Some of those key players tried to leverage more than just their market positions; they sought to apply technological advantages as well — in one very notable instance, even if it meant contriving that advantage artificially. Consumers are always smarter than marketing professionals presume they are. Two years ago, one carrier in particular (which shall remain nameless, in deference to folks who complain I tend to jump on AT&T's case) pulled the proverbial wool in a direction that was supposed to cover consumers' eyes. The "5G+" campaign divebombed, and as a result, there's no way any carrier can cosmetically alter the appearance of existing smartphones, to give their users the feeling of standing on the threshold of a new and forthcoming sea change.


Learn SAML: The Language You Don't Know You're Already Speaking

SAML streamlines the authentication process for signing into SAML-supported websites and applications, and it's the most popular underlying protocol for Web-based SSO. An organization has one login page and can configure any Web app, or service provider (SP), supporting SAML so its users only have to authenticate once to log into all its Web apps (more on this process later). The protocol has recently made headlines due to the "Golden SAML" attack vector, which was leveraged in the SolarWinds security incident. This technique enables the attacker to gain access to any service or asset that uses the SAML authentication standard. Its use in the wild underscores the importance of following best practices for privileged access management. A need for a standard like SAML emerged in the late 1990s with the proliferation of merchant websites, says Thomas Hardjono, CTO of Connection Science and Engineering at the Massachusetts Institute of Technology and chair of OASIS Security Services, where the SAML protocol was developed. Each merchant wanted to own the authentication of each customer, which led to the issue of people maintaining usernames and passwords for dozens of accounts.
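For a feel of what a service provider actually consumes, here is a toy, unsigned SAML 2.0 assertion and the two fields an SP cares most about. Real deployments must verify the IdP's XML signature before trusting anything; forging that signature with a stolen signing key is precisely what a Golden SAML attack does.

```python
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
assertion = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject><saml:NameID>alice@example.com</saml:NameID></saml:Subject>
  <saml:Conditions>
    <saml:AudienceRestriction>
      <saml:Audience>https://sp.example.com/metadata</saml:Audience>
    </saml:AudienceRestriction>
  </saml:Conditions>
</saml:Assertion>"""

# Extract who is logging in, and which SP the assertion is scoped to.
root = ET.fromstring(assertion)
print("user    :", root.findtext("saml:Subject/saml:NameID", namespaces=NS))
print("audience:", root.findtext(".//saml:Audience", namespaces=NS))
```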


Biometrics ethics group addresses public-private use of facial recognition

“To maintain public confidence, the BFEG recommends that oversight mechanisms should be put in place,” it said. “The BFEG suggests that an independent ethics group should be tasked to oversee individual deployments of biometric recognition technologies by the police and the use of biometric recognition technologies in public-private collaborations (P-PCs). “This independent ethics group would require that any proposed deployments and P-PCs are reviewed when they are established and monitored at regular intervals during their operation.” Other recommendations included that police should only be able to share data with “trustworthy private organisations”, specific members of which should also be thoroughly vetted; that data should only be shared with, or accessible to, the absolute minimum number of people; and that arrangements should be made for the safe and secure sharing and storage of biometric data. The BFEG’s note also made clear that any public-private collaborations must be able to demonstrate that they are necessary, and that the data sharing between the organisations is proportionate.


Security Threats to Machine Learning Systems

The collection of good and relevant data is a very important task. For the development of a real-world application, data is collected from various sources. This is where an attacker can insert fraudulent and inaccurate data, thus compromising the machine learning system. So, even before a model has been created, the whole system can be compromised by an attacker inserting a very large chunk of fraudulent data; this is a stealthy channel attack. This is the reason why data collectors should be very diligent while collecting data for machine learning systems. ... Data poisoning directly affects two important aspects of data: data confidentiality and data trustworthiness. Often the data used for training a system contains confidential and sensitive information. Through a poisoning attack, the confidentiality of the data is lost. Maintaining the confidentiality of data is considered a challenging area of study by itself; the additional aspect of machine learning makes the task of securing it that much more important. Another important aspect affected by data poisoning is data trustworthiness.
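A toy illustration of the effect, on entirely invented data: an attacker injects mislabelled records into the training set, and a simple nearest-centroid classifier degrades.

```python
import numpy as np

rng = np.random.default_rng(1)
X0 = rng.normal(0, 1, (200, 2))                 # genuine class-0 samples
X1 = rng.normal(3, 1, (200, 2))                 # genuine class-1 samples

def train(X_a, X_b):
    """Nearest-centroid classifier from (claimed) class-0/class-1 data."""
    c0, c1 = X_a.mean(0), X_b.mean(0)
    def predict(X):
        return (np.linalg.norm(X - c1, axis=1)
                < np.linalg.norm(X - c0, axis=1)).astype(int)
    return predict

def accuracy(predict):
    preds = np.concatenate([predict(X0), predict(X1)])
    truth = np.array([0] * 200 + [1] * 200)
    return (preds == truth).mean()

clean = train(X0, X1)
fakes = rng.normal(3, 1, (300, 2))              # fraudulent rows labelled 0
poisoned = train(np.vstack([X0, fakes]), X1)

print("clean accuracy   :", accuracy(clean))     # high, ~0.98
print("poisoned accuracy:", accuracy(poisoned))  # noticeably lower
```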


Fuzzing (fuzz testing) tutorial: What it is and how it can improve application security

We know when a programmer is developing code, they have different computations depending upon what the user gives them. So here the program is the maze and then we have, let's just pretend, a little robot up here and input to the program is going to be directions for our robot through the maze. So for example, we can give the robot the directions, I'm going to write it up here, down, left, down, right. And he's going to take two rights, just meaning he's going to go to the right twice. And then he's going to go down a bunch of times. So you can think about giving our little robot this input and robot is going to take that as directions and he's going to take this path through the program. He's going to go down, left, down first right, second right, then a bunch of downs. And when you look at this, we had a little bug here. They can verify that this is actually okay. There's no actual bug here. And this is what's happening when a developer writes a unit test. So what they're doing is they're coming up with an input and they're making sure that it gets the right output. Now, a problem is, if you think about this maze, we've only checked one path through this maze and there's other potential lurking bugs out there.
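The maze analogy translates almost directly into code. In the toy example below (our own, not the tutorial's), a single unit test checks one happy path, while a few thousand random inputs quickly steer the "robot" onto a path nobody wrote a test for.

```python
import random

def parse(directions: str) -> None:
    """Walk the maze; 'd' goes down, 'u' goes up, 'l'/'r' are harmless."""
    depth = 0
    for step in directions:
        if step == "d":
            depth += 1
        elif step == "u":
            depth -= 1
    if depth == -3:                 # lurking bug on an untested path
        raise RuntimeError("crash: walked off the top of the maze")

def test_happy_path():
    parse("ddlrru")                 # the one path a unit test checks

test_happy_path()                   # passes -- but the maze is bigger
random.seed(7)
for _ in range(10_000):
    fuzz_input = "".join(random.choices("dulr", k=8))
    try:
        parse(fuzz_input)
    except RuntimeError as err:
        print(f"input {fuzz_input!r} triggered: {err}")
        break
```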


The three steps for smart cities to unlock their full IoT potential

In theory, if a city applied uniform standards across all of its IoT-connected devices, it could achieve full interoperability. Nevertheless, we believe that cities and regulators should focus on defining common communication standards to support technical interoperability. The reason: Although different versions exist, communications standards are generally mature and widely used by IoT players. In contrast, the standards that apply to messaging and data formats—and are needed for syntactic interoperability—are less mature, and semantic standards remain in the early stages of development and are highly fragmented. Some messaging and data format standards are starting to gain broad acceptance, and it shouldn’t be long before policymakers can prudently adopt the leading ones. With that scenario in mind, planners should ignore semantic standards until clear favorites emerge. Building a platform that works across use cases can improve interoperability. The platform effectively acts as an orchestrator, translating interactions between devices so that they can share data and work. In a city context, a cross-vertical platform offers significant benefits over standardization.



Quote for the day:

"Education makes a people difficult to drive, but easy to lead; impossible to enslave, but easy to govern." -- Lorn Brougham

Daily Tech Digest - January 30, 2021

Internet of Cars: A driver-side primer on IoT implementation

There are millions of internet-connected cars already on the road, albeit mostly with crude subscription services for music and weather apps. With further advances, connection will be much more encompassing, with the average connected car having up to 200 sensors installed, each recording a point of data, minute by minute. The numbers quickly become staggering, and in emergency situations, the need for data agility is apparent. Picture driving on a highway in medium traffic. If someone’s tire blows out half a mile ahead, this information could be quickly conveyed to surrounding cars, warning of the potential for emergency braking. Any DLT solution would have to include a very nimble verification process for all these new packets of information to be brought into and carried by the network. Additionally, because of the computational complexity involved, almost all DLTs today charge a fee for each new transaction brought into the network. In fact, the fee is an integral part of the structure of many of these computational models. This is obviously not going to be workable in a system like urban traffic that would be generating billions of “transactions” every day. The truth is that decentralized data networks were never designed to handle these kinds of massive use-case scenarios.
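The scale problem is easy to see with back-of-the-envelope arithmetic; the fee and volume figures below are hypothetical placeholders, not measurements of any particular DLT.

```python
# Even an optimistically tiny per-transaction fee breaks down at
# urban-traffic volumes.
transactions_per_day = 2_000_000_000   # "billions of transactions"
fee_per_transaction = 0.0001           # hypothetical fee, in USD

daily_cost = transactions_per_day * fee_per_transaction
print(f"daily fees : ${daily_cost:,.0f}")        # $200,000
print(f"yearly fees: ${daily_cost * 365:,.0f}")  # $73,000,000
```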


AI vendors may have to prove systems don't discriminate

Providing proof that AI models are non-discriminatory means AI vendors would have to become much more transparent about how AI models were trained and developed, according to Purcell. "In the bill, it talks about the necessity of understanding what the training data was that went into creating the model," he said. "That's a big deal because today, a lot of AI vendors can just build a model kind of in secret or in the shadows and then put it on the market. Unless the model is being used for a highly regulated use case like credit determination or something like that, very few people ask questions." That could be easier for the biggest AI vendors, including Google and Microsoft, which have invested heavily in explainable AI for years. Purcell said that investment in transparency serves as a differentiator for them now. In general, bias in an AI system largely results from the data the system is trained on. The model itself "does not come with built-in discrimination, it comes as a blank canvas of sorts that learns from and with you," said Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis. Yet, many vendors sell pre-trained models as a way to save their clients the time and know-how it normally takes to train a model. That's ordinarily uncontroversial if the model is used to, say, detect the difference between an invoice and a purchase order, Pelz-Sharpe continued.


Microsoft releases Application Guard for Office to M365 customers

Application Guard for Office isolates certain files opened in the suite's three primary applications: Word, Excel and PowerPoint. Documents obtained from untrusted Internet or intranet domains, files pulled from potentially unsafe areas, and attachments received through the Outlook email client, are opened in a virtualized environment, or sandbox, where malicious code can't wreak havoc. Unlike the much older Protected View, another Office defensive feature — it opens potentially dangerous documents as read-only — files opened in Application Guard can be manipulated. They can be printed, edited and saved. When saved, they remain in the isolation container and when reopened later, again are quarantined in the sandbox. Outdated file types — which can be set by administrators in the File Block feature within Word, Excel and PowerPoint — are also shunted into Application Guard's virtual machine. Application Guard for Office will be available to customers licensing Microsoft 365 E5 or Microsoft 365 E5 Security, and for now, only to those on either the Current Channel or Monthly Enterprise Channel. (Those are the Microsoft 365 update channels that deliver the most frequent refreshes.)


Digital nomads and "bleisure" define the new high-tech take on work trips

Many organizations have adopted remote work policies amid a modern plague. While some companies have brought telecommuters back to the traditional office, others have made long-term commitments to remote work. Ernest Lee, managing director of development and investments, Americas, with citizenM hotels, similarly alluded to remote work-enabled "nomadic behavior" among professionals. The company recently announced a global passport: a subscription service allowing remote workers with a penchant for frequent traveling to stay in any of citizenM's 21 hotels around the globe. "We certainly think that this new sort of lifestyle will attract a certain type of person that wants to also blend in their personal interests and passions [with] not having to be tied down so much to a fixed location," Lee said. The company also offers a corporate subscription providing organizations with access to hotel rooms and meeting room spaces at a fixed price. Lee explained that this package is designed for remote teams who are no longer sharing "the same co-located space." To enhance the traditional business travel experience, hotels are incorporating a wide range of technologies, in-app features, Internet of Things (IoT) capabilities, and more.


'Clone Firm' Fraudsters Stealing Millions From UK Investors

A clone firm is a fake entity created by fraudsters that uses the name, address and Firm Reference Number - a unique identifier assigned to every financial or investment firm in the U.K. and issued by the Financial Conduct Authority - of a legitimate organization, according to the alert. In some cases, the scammers will clone or spoof the entire website of a legitimate firm. Once these fake and spoofed websites are created, the fraudsters then send sales and marketing materials to would-be investors that appear to originate from legitimate firms. The scammers also advertise on social media, according to the alert. The fraudsters use phishing emails and social engineering techniques to lure victims, and their use of the legitimate sales materials gives the scheme a sheen of authenticity. Once a connection is established, the fraudsters attempt to get victims to send money to the cloned firm, the NCA notes. "Fraudsters use literature and websites that mirror those of legitimate firms, as well as encouraging investors to check the Firm Reference Number on the FCA Register to sound as convincing as possible," says Mark Steward, executive director of enforcement and market oversight for the Financial Conduct Authority.


DDoS Attacks Reach Over 10 Million in 2020

Richard Hummel, threat intelligence lead at NETSCOUT, said, “It is no coincidence that this milestone number of global attacks comes at a time when businesses have relied so heavily on online services to survive. Threat actors have focused their efforts on targeting crucial online platforms and services such as healthcare, education, financial services and e-commerce that we all rely on in our daily lives. As the COVID-19 pandemic continues to present challenges to businesses and societies around the world, it is imperative that defenders and security professionals remain vigilant to protect the critical infrastructure that connects and enables the modern world.” DDoS attack count, bandwidth, and throughput all saw significant increases since the start of the global COVID-19 pandemic. For instance, attack frequency rose 20% year over year, but that includes the pre-pandemic months of January, February, and most of March. For the second half of 2020, which was entirely pandemic-ridden, attacks rose 22% year over year. As cybercriminals quickly exploited pandemic-driven opportunities, we saw another kind of ‘new normal.’ Monthly DDoS attacks regularly exceeded 800,000 starting in March, as the pandemic lockdown took effect. 


IoT at the edge: magic won’t happen automatically

Creating more value at the edge: Dheeraj Remella, Chief Product Officer at VoltDB, notes the uncertainty around many edge and IoT business cases. He argues, “Telcos spend a lot of time talking about moving up the value chain beyond connectivity, and this is a great opportunity. Differentiation is based on sets of complementary features, contributed by an ecosystem, that create capabilities rather than individual features, which as stand-alones are not compelling. The owner of the platform that delivers that joint capability holds the keys to the digital kingdom.” As Remella points out, decisioning at low-millisecond speed is one thing on a private network within an industrial plant, but another ball game when the edge is hugely distributed, such as a wind farm over hundreds or thousands of acres, or for smart agriculture or an electricity grid. He says that often, to cut down processing times at the edge, companies take what he calls a “hyper-contextualised” approach – automating decisions based on data about a single entity or an isolated set of events. This limits its usefulness, just making existing processes digital (digitising), rather than using advances in technology to do things we’ve never been able to do before (digitalising), which means doing things differently – changing processes.


Sorry, Data Lakes Are Not “Legacy”

From a technical perspective, compute and storage are intended to be loosely coupled in a modern architecture. As a result, this is a benefit for warehouses. However, the benefit is not just for warehouses. Any modern data architecture, by design, depends on a loosely coupled separation of compute and storage to deliver an efficient, scalable, and flexible solution. The fact that data warehouse vendors are introducing separate compute and storage is not innovation compared to data lakes; it is achieving parity with data lakes. The evolution of separate compute and storage in warehouses brings them in line with the architecture employed by productive data lakes via on-demand SQL query services. In a post called When to Adopt a Data Lake — and When Not to, the dig at data lakes was that they could not scale compute easily or on-demand: “Some solutions architects have proposed data lakes to ‘separate compute from storage’ in a traditional data warehouse. But they’re missing the point: You want the ability to scale compute easily and on-demand. A data lake isn’t going to give you this; what you need is a data warehouse that can provision and suspend capacity whenever you need it.”


AI, machine learning effective in cyber defence, but can also present challenges

"Antivirus technology, for example, operates a strict ‘yes or no’ policy as to whether a file is potentially malicious or not. It’s not subjective, through a strict level of parameters, something is either considered a threat, or not." he says. "The AI can quickly determine whether it’s going to crash the device, lock the machine, take down the network and as such, it is either removed or allowed. "It is important to note that VIPRE uses AI and ML as key components in their email and endpoint security services for example as part of their email security attachment sandboxing solution where an email attachment is opened and tested by AI in an isolated environment away from a customer’s network," Paterson adds. "So while AI might not be an ideal method for preventing accidental data leakage through email, it does have an important part to play in specific areas such as virus detection, sandboxing and threat analysis." Paterson says with so much reliance on email within business practices, accidental data leakage is an inevitable risk. "The implications of reputational impact, compliance breach and associated financial damage can be devastating. A cyber-aware culture with continuous training is essential, and so is the right technology," he says.


Does CI/CD impact telecom operations?

In the standard microservice code model that underpins cloud-native software, every time a common code software component is improved, it will change all network systems that use that standard code. This approach can bring lightning-fast agility and innovation but leaves today's legacy bi-annual software test and validate processes entirely unfit for purpose. The telecom CI/CD philosophy means that software is developed, delivered, tested, accepted, and brought into operation incrementally at a far higher cadence than previously in a traditional service provider environment. Further, it creates a significant software development volume that needs validation on an increasingly dynamic network. This approach implies that continuous software validation and continuous testing must accompany continuous software delivery and deployment. These requirements demand a new agile way of working between the network operator, its software suppliers, and vendors. Essentially, the merging of Dev and Ops as in the IT world is now a must for the telecom context where the 'Dev' from vendors needs to seamlessly merge and receive feedback from the 'Ops' on the operator side of the firewall. This evolution requires a transformation on both the vendor side as well as the operator side.



Quote for the day:

"Entrepreneurship is the last refuge of the trouble making individual." -- Natalie Barney

Daily Tech Digest - January 29, 2021

Expert: Agile data-driven decision-making key to growth

"You can't achieve agility, and you can't be adaptive unless you empower your business users with as much self-service analytics and business intelligence and reporting as they can consume," Evelson said. "Self-service is really the only way to become agile and adaptive." That, however, is linked to data governance, which is also imperative to agile data-driven decision-making. "There is a very fine line between too much self-service and not enough governance, versus too much governance and not enough self-service," Evelson added. "Hopefully, there is a middle ground between the two, which we call Goldilocks data governance." All of the competencies together, meanwhile, enable an organization to be agile through what Evelson terms multi-modal analytics and reporting. They empower organizations to do descriptive analytics through dashboards and reports, diagnostic and predictive analytics to get insights, and ultimately prescriptive and actionable analytics to make decisions and trigger actions. And should organizations fail to become agile and adapt to constant change, they risk irrelevancy and ultimately insolvency. Forty years ago, the average lifespan of companies in the S&P 500 was about 30 years, Evelson said.


The Brain Is Neither a Neural Network nor a Computer

Autonomy is the idea that the brain is self-governing, receptive to the environment, but always in control. Somatic disorders ranging from improper sugar levels and hormone imbalances to diseases such as malaria or syphilis can cause mental dysfunction. Some individuals are placed in mental hospitals when correcting an underlying disorder would actually fix the problem. At the simplest level, no amount of mental determination would make you a world-class athlete if you did not have the right type of muscle fibers or hand-eye coordination. You cannot flap your arms and fly—the aerodynamics does not allow it. Paganini could only be the legendary violinist he was because of his flexibility. No amount of musicianship could provide that ability. Cognitive processes are embodied. They emerge from the interaction between physical organisms and their environment, not just their brains. For example, there is evidence that the nature of your gut bacteria can cause anxiety, stress, and even depression. Replacing a diseased organ with a healthy one can increase mental functioning. A kidney transplant will help remove poisons from the blood such as urea or ammonia which will increase brain health.


The state of corporate legal departments and the role of the Chief Legal Officer

The survey affirms we are in the “age of the CLO.” With 78 percent of respondents reporting to the CEO, the overall trend remains very positive. Further, while CLOs still spend around one quarter of their time providing legal advice, they also spend a significant amount of time on board matters and governance issues, contributing to strategy development, and advising other executives on non-legal issues. The survey found that 46 percent of CLOs are responsible for their company’s data privacy function, reflecting the growing integration of legal in business strategy and technology policy. In the order of functions reporting to the Chief Legal Officer, only compliance (74 percent) outranks privacy. CLOs are also increasingly engaging with environmental, social, and governance issues. This includes diversity and inclusion (D&I). A full 72.7 percent of CLOs expect diversity and inclusion specifically to accelerate in 2021. Encouragingly, even despite COVID-19, 32 percent of law departments plan to take on more lawyers in 2021, a slight increase over 30 percent from 2020.

Defense Against Vulnerabilities in the Cloud – Is It Too Late?

Apart from the traditional challenges around access management, data pilferage and threats from data communication with third-party applications are gaining prominence. Communication with third-party applications has found increased traction through APIs, which are increasingly being targeted by threat actors. Further, misconfigurations and policy violations in cloud assets create potential vulnerabilities and backdoors, leading to risk of compromise. This is primarily due to the policies of some companies to not change the default security settings on their cloud workloads. These cloud vulnerabilities are accentuated by the increasing number of connected systems and their dependencies. The genesis of many vulnerabilities boils down to access and privilege management. Organizations need to plan for a deep inspection and vulnerability management system as part of their DevSecOps pipeline for building scalable cloud-native applications. A comprehensive vulnerability management system goes a long way toward enabling organizations to effectively manage and minimize their attack surface.
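As one concrete slice of that "deep inspection", the sketch below flags S3 buckets whose ACLs grant access to everyone, a classic default-settings misconfiguration. It assumes boto3 and configured AWS credentials; a real pipeline would also check bucket policies and public-access-block settings.

```python
import boto3

# URI AWS uses in ACL grants to denote "everyone on the internet".
PUBLIC = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") == PUBLIC:
            print(f"PUBLIC: {bucket['Name']} grants "
                  f"{grant['Permission']} to AllUsers")
```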


How to build a trustworthy and connected future

More broadly, big(ger) data from personal, commercial and government sources has the potential to address various challenges related to the Sustainable Development Goals. For instance, the Humanitarian and Resilience Investing Initiative aims to fill critical gaps in the available data that are preventing investors from accessing more humanitarian and resilience investing (HRI) opportunities. The pandemic has exposed and exacerbated existing gaps and inequalities: notably, almost half of the global population remains offline, and broadband services are too expensive for 50% of the population in developed countries. These “connectivity deserts” hamper access to health, education and economic inclusion. In a bid to improve access to the digital economy, during The Davos Agenda, the Forum launched the Essential Digital Infrastructure and Services Network, or EDISON Alliance, tasked with working to accelerate digital inclusion. Meanwhile, in metropolises around the globe, which account for nearly two-thirds of CO2 emissions, smart energy infrastructure connected through data and digitalization is central to transitioning to “net zero” cities.


2020 Marked a Renaissance in DDoS Attacks

The sheer quantity of attacks in 2020 was surprising, Kaczmarek says. "We always expect the number of attacks to increase year over year and quarter over quarter, but we didn't expect that the quantity would increase by over 150%," he says. "This truly reflects the impact of the pandemic and the challenging precedent the 'new normal' has set for cybersecurity." The number of DDoS attacks that involved two or more vectors increased from 40% in 2019 to 72% in 2020, Kaczmarek added. "This means that the attackers as well as the tools they are using are improving," he says. According to Neustar, while the use of DDoS to try and extort ransoms is not new, these attacks grew in persistence, sophistication, and targeting in 2020. Cyber extortionists purporting to belong to well-known nation-state groups went after organizations in industries they have not regularly targeted previously, such as financial services, government, and telecommunications. "RDDoS attacks surged in Q4 2020 as groups claiming to be Fancy Bear, Cozy Bear, and the Lazarus Group attempted to extort organizations around the world," says Omer Yoachimik, product manager, DDoS protection at Cloudflare, another vendor that observed the same trend.


A better kind of cybersecurity strategy

The core of the matter involves deterrence and retaliation. In conventional warfare, deterrence usually consists of potential retaliatory military strikes against enemies. But in cybersecurity, this is more complicated. If identifying cyberattackers is difficult, then retaliating too quickly or too often, on the basis of limited information such as the location of certain IP addresses, can be counterproductive. Indeed, it can embolden other countries to launch their own attacks, by leading them to think they will not be blamed. “If one country becomes more aggressive, then the equilibrium response is that all countries are going to end up becoming more aggressive,” says Alexander Wolitzky, an MIT economist who specializes in game theory. “If after every cyberattack my first instinct is to retaliate against Russia and China, this gives North Korea and Iran impunity to engage in cyberattacks.” But Wolitzky and his colleagues do think there is a viable new approach, involving a more judicious and well-informed use of selective retaliation. “Imperfect attribution makes deterrence multilateral,” Wolitzky says. “You have to think about everybody’s incentives together. Focusing your attention on the most likely culprits could be a big mistake.”
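A toy calculation (illustrative numbers only, not the researchers' model) shows the impunity effect: when retaliation concentrates on the usual suspect, other actors face an expected payoff that still favors attacking.

```python
# Assumed probability each actor is blamed when it actually attacks.
P_BLAME = {
    "usual_suspect": 0.9,
    "other_actor":   0.2,
}
GAIN, PENALTY = 10, 30   # payoff of a successful attack vs. cost if blamed

for actor, p in P_BLAME.items():
    expected = GAIN - p * PENALTY
    verdict = "deterred" if expected < 0 else "attacks anyway"
    print(f"{actor}: expected payoff {expected:+.1f} -> {verdict}")
```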


US, China or Europe? Here's who is really winning the global race for AI

On almost all metrics, therefore, the EU seems to be taking a backseat; and according to the researchers, there is no doubt that this is due to stringent regulations that are in place within the bloc. "Many in Europe do not trust AI and see it as technology to be feared and constrained, rather than welcomed and promoted," concludes the report, recommending that the EU change its regulatory system to be "more innovation-friendly". The General Data Protection Regulation (GDPR), say the researchers, limits the collection and use of data that can foster developments in AI. Proposals for a Data Governance Act, while encouraging the re-use of public sector data, also restrains the transfer of some information; and by creating European data spaces, the regulation could inhibit global partnerships. Recent reports show that the last year has seen almost a 40% increase in GDPR fines issued by the EU compared to the previous 20 months, reaching a total of $332 million in fines since the new laws started applying. In that context, it is not rare to find that some firms are deterred from developing AI systems altogether, out of fear of receiving a fine – even for the most well-intentioned innovations.


A Guide to Find the Right IoT Module for Your Project

As more small and new module providers emerge in the IoT market, many cheaper IoT modules are becoming available to customers at extremely attractive price tags. If we simply look at the initial deployment cost of using cheaper modules, it might look like they save a lot of money for the customers. But is the quality of these modules guaranteed? The process of developing a new product and making it deliverable to the market is long and costly. Low-quality modules always carry a higher risk of malfunction and, in the worst case, result in the failure of the whole project. This will not help IoT companies generate the expected project income; in reverse, it causes a greater loss on investment. From a long-term perspective, even if the product is launched to the market, the unstable performance of the module is likely to cause unwanted surprises and require frequent maintenance. This will not simply be a higher operating cost to the business; it will also harm the reputation of the brand and damage customers’ loyalty. For the long-term growth of the business, choosing a reliable partner and quality-guaranteed module products is wise and worthy.


Researchers: Beware of 10-Year-Old Linux Vulnerability

The vulnerability, called "Baron Samedit" by the researchers and officially tracked as CVE-2021-3156, is a heap-based buffer overflow in the Sudo utility, which is found in most Unix and Linux operating systems. Sudo is a utility included in open-source operating systems that enables users to run programs with the security privileges of another user, which would then give them administrative – or superuser – privileges. The bug, which appears to have been added into the Sudo source code in July 2011, was not detected until earlier this month, Qualys says. "Qualys security researchers have been able to independently verify the vulnerability and develop multiple variants of exploits and obtain full root privileges on Ubuntu 20.04 (Sudo 1.8.31), Debian 10 (Sudo 1.8.27), and Fedora 33 (Sudo 1.9.2). Other operating systems and distributions are also likely to be exploitable," the researchers say. After Qualys notified the authors of Sudo, a patch was included in version 1.9.5p2, published this week. Qualys and the Sudo authors are urging Linux and Unix users to immediately patch systems. Rob Joyce, who was recently named director of the National Security Agency's Cybersecurity Directorate, also flagged the alert on Twitter.
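A rough self-check circulated with the advisory runs sudoedit with the flag combination at issue and inspects the reply. The wrapper below assumes a Unix-like host with sudo installed; treat it as a hint only, and rely on your distribution's patched package versions for an authoritative answer.

```python
import subprocess

# Heuristic: on affected builds, `sudoedit -s /` reports an error
# beginning "sudoedit:", while patched builds print a usage message.
try:
    result = subprocess.run(["sudoedit", "-s", "/"],
                            capture_output=True, text=True)
except FileNotFoundError:
    raise SystemExit("sudoedit not found on this system")

output = (result.stderr or result.stdout).strip()
if output.startswith("sudoedit:"):
    print("Possibly VULNERABLE to CVE-2021-3156 -- update sudo now.")
else:
    print("Likely patched; sudo replied:",
          output.splitlines()[0] if output else "(no output)")
```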



Quote for the day:

"Believe those who are seeking the truth. Doubt those who find it." -- Andre Gide

Daily Tech Digest - January 28, 2021

Engaging Employees to Accelerate Digital Banking Transformation

Many financial institutions are investing heavily in new technologies and processes to support their digital banking transformation goals. Research by the Digital Banking Report has found that banks and credit unions have increased investment in digital transformation in each of the past four years. There is no doubt that these investments are justified given the flight to digital by consumers and the game-changing technology that can support digital customer experience improvements. Unfortunately, with such a focus on data, analytics, technology and systems, most firms ignore the need to invest in employees to make sure they maximize the value of the new tools being deployed. Beyond open communication around how employees can be a part of the digital banking transformation process, it is important to invest in training the people to ensure that the digital banking transformation efforts succeed. If you don’t, it’s like buying a new car but failing to fill the gas tank (or charge the batteries). To respond to the need to reskill and upskill current employees, new models of managing learning and development have emerged. More than replicating legacy training methods, new learning officer positions have been created with the responsibility of not only creating ongoing learning opportunities, but also supporting cultural transformation.


Here’s why upskilling is crucial to drive the post-COVID recovery

We have a pressing societal problem: how to equip people with the skills they need to participate in the economy – now and in the future. As outlined in the World Economic Forum’s latest Future of Jobs Report, half of all employees around the world will need reskilling by 2025 – and that number doesn’t include all the people who are currently not in employment. If we don’t act now, this skills gap will only widen. With challenges come opportunities. Crisis events, like the pandemic, can and should shape economic thinking and represent a rare but narrow window of opportunity to reflect, reimagine, and reset priorities. So let’s seize this opportunity. We’re calling on governments, business leaders, and educators to join us in a global movement for upskilling. As you’ll see in our new report – Upskilling for Shared Prosperity – published as part of Davos Agenda Week to mark the first anniversary of the World Economic Forum’s Reskilling Revolution Platform, there’s a clear social and economic case for upskilling. If we commit to giving all people opportunities to build the skills they will need to fully participate in the future workplace, it will, in turn, lead to a prosperity dividend.


Law enforcement takes over Emotet, one of the biggest botnets

According to Europol, Emotet's infrastructure consisted of several hundred servers located across the world and serving different purposes, including making the botnet more resilient to takeover attempts. Law enforcement agencies had to work together to develop a strategy that involved gaining control of the infrastructure from the inside and redirecting victims to servers under their own control. As part of the investigation, the Dutch National Police seized data from the servers used by Emotet, including a list of stolen email credentials abused by the botnet. The agency set up a web page where users can check if their email address was among those affected. The information about infected computers that was gathered during the operation was also shared with national CERTs so the victims can be identified and contacted. "Only time will tell if the takedown will have long-term impact to Emotet operations," Jason Passwaters, COO of security firm Intel 471, tells CSO. "These groups are sophisticated and will have baked in some sort of recovery. Emotet itself does not appear to have any sort of inherent recovery mechanism, but a lot of the infected machines will have other malware installed as well, such as Qbot, Trickbot or something else. ..."


Top 5 Evolving Cybersecurity Threats to Cloud Computing in 2021

According to the Sophos Threat Report of 2020, misconfigurations drive numerous data breaching incidents. Businesses are integrating themselves with cloud computing, which raises the possibility of cloud jacking. Trend Micro predicts that code injection attacks can be utilized to attack cloud platforms. These attacks, from SQL injection to cross-site scripting, can be carried out through third-party libraries: attackers inject malicious code into those libraries and ensure that the code is downloaded and executed by individuals unintentionally. According to typical public cloud vendors, they are only responsible for the security of their infrastructure, and individuals are responsible for protecting their data. ... Social engineering uses phishing scams to steal user credentials for both cloud-service and on-premises attacks. Do you know that 78% of data breaching incidents that occurred during 2019 were related to phishing? This percentage increased in 2020. Innovative phishing attempts are launched through cloud applications rather than traditional emails. Phishing kits make it easier for cybercriminals to carry out illicit activities, requiring very little technical skill to run phishing operations.
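The SQL injection class mentioned above fits in a few lines; the sketch uses Python's built-in sqlite3 purely for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

evil = "nobody' OR '1'='1"

# Vulnerable: the attacker's quote breaks out of the string literal,
# turning the WHERE clause into a tautology that matches every row.
rows = db.execute(
    f"SELECT * FROM users WHERE name = '{evil}'").fetchall()
print("vulnerable query leaked:", rows)        # alice's row

# Safe: the driver binds the value, so no row matches the literal string.
rows = db.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print("parameterized returned :", rows)        # []
```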


What Is Robomorphic Computing?

A robot’s operation is a three-step process: gathering data using sensors or cameras; using mapping and localisation techniques to understand the environment; and plotting the course of action. Advances in embedded vision and SLAM technology make data gathering and localisation easy. However, all these steps take a lot of time, especially when the calculations are done on CPUs. Previously, the researchers investigated the software side, developing efficient algorithms to speed up robots. The MIT team concluded it’s time to look beyond software. Hardware acceleration is the use of a specialised hardware unit to perform certain computing tasks more efficiently. While Graphics Processing Units (GPUs) have been used for such tasks, their applicability is limited because use cases differ from robot to robot. Hence, the researchers at MIT developed robomorphic computing to devise a customised hardware unit for an individual robot. It takes the physical parameters of the robot and the tasks it needs to perform, translates them into mathematical matrices, and uses those to design a specialised hardware architecture. The resulting chip design is unique to the robot and maximises its efficiency.
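The core observation is that a robot’s fixed morphology determines which entries of its kinematics and dynamics matrices can ever be non-zero, so computation (in hardware or software) can skip the rest. The following Python sketch illustrates that intuition only; the serial-arm sparsity pattern and the matrix values are invented and are not the MIT formulation:

    import numpy as np

    def serial_arm_mask(n_links: int) -> np.ndarray:
        # Illustrative lower-triangular pattern for a serial chain:
        # link i is only influenced by joints 1..i.
        return np.tril(np.ones((n_links, n_links), dtype=bool))

    def masked_matvec(M, v, mask):
        # Matrix-vector product that skips entries the mask marks as
        # structurally zero -- the software analogue of leaving those
        # multiplier units out of a custom chip entirely.
        out = np.zeros(len(v))
        for i in range(M.shape[0]):
            for j in range(M.shape[1]):
                if mask[i, j]:
                    out[i] += M[i, j] * v[j]
        return out

    mask = serial_arm_mask(3)
    M = np.arange(9.0).reshape(3, 3) * mask  # structural zeros stay zero
    v = np.array([1.0, 2.0, 3.0])
    print(masked_matvec(M, v, mask))  # matches M @ v with fewer multiplies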


Digital Identity Is the New Security Control Plane

Digital identity — in the form of trusted contextual data defining who is accessing a system and how — provides this control plane. Users are already providing identity (and likely at multiple points), and systems are already consuming it — in software-as-a-service (SaaS) environments, it may be one of the few configurable security controls available — but the decoupling of security from location and IP address is present in many other solutions as well. It can be tailored to an organization's needs and made risk-sensitive, with different methods and phases required depending on the resource accessed. Even better, it's a control plane that can and should be implemented in a phased approach, and it provides a path to a zero-trust network architecture. The steps to building this are conceptually simple, and much of the preparation can be done in advance. First, ensure, even before you implement, that the technologies you are investing in are identity-aware and able to make differentiated security decisions in the data plane based on that identity. This must extend to SaaS applications — one of the largest benefits of using identity as your control plane is the ability to bring these into the fold, as it were, and to match them to your security model. Second, consolidate identity to a single "source of trust" — that is, a single secure, consistent, and accurate repository for identity.
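As a thought experiment, here is a tiny Python sketch of the kind of risk-sensitive, identity-aware decision described above; every field, threshold, and rule is hypothetical, not a prescription from the article:

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        mfa_passed: bool
        device_managed: bool
        resource_sensitivity: str  # "low" or "high" (invented taxonomy)

    def decide(req: AccessRequest) -> str:
        # Risk-sensitive: stronger identity signals are demanded as the
        # sensitivity of the requested resource increases.
        if req.resource_sensitivity == "high":
            if req.mfa_passed and req.device_managed:
                return "allow"
            if req.mfa_passed:
                return "step-up"  # e.g. require a managed device or re-auth
            return "deny"
        return "allow" if req.mfa_passed else "step-up"

    print(decide(AccessRequest("alice", mfa_passed=True,
                               device_managed=False,
                               resource_sensitivity="high")))  # step-up

The decision consumes identity context rather than network location, which is what makes it portable across on-premises systems and SaaS.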


Data Privacy Day 2021: What to consider in the wake of Covid-19

The exit of the UK from the EU means that companies across the country that deal with Europe need to take extra steps to ensure correct compliance. According to Rich Vibert, CEO and co-founder of Metomic, this can be aided by considering this aspect at the start of any deployment. “This Data Privacy Day, we must confront the fact that UK companies aren’t equipped to protect their data now that we’ve Brexited,” said Vibert. “A large proportion of the responsibility for this lies with the UK government, whose failure to deliver guidance during the transition period resulted in businesses adopting a ‘wait and see’ approach. “Businesses need to take charge, proactively adapting compliance to UK-GDPR and analysing how a lack of adequacy could impact them and their customers. Only by doing so will they avoid the financial and reputational damage caused by non-compliance. “Regardless of whether the government holds the blame for the current status quo or not, leaders must see this as an opportunity to reset their approach to data protection. This means putting the privacy, compliance and security of data at the heart of their business strategy and using technology to facilitate this.”


Marry IGA with ITSM to avoid the pitfalls of Identity 2.0

IAM solutions are too coarse-grained to handle such moves, in my experience. That forces admins to do IGA the hard way – taking care of onboarding, job changes, terminations, and so forth by hand. In addition to being a time- and labor-intensive hassle, manual IGA leads to numerous identity management errors. All too often, manual IGA grants access to new applications or information sources but doesn’t take away old ones, which exposes companies to security and compliance risks. Manual processes for managing patches, password resets, software updates, and more also increase risks. You don’t want an executive accessing highly confidential information from an app that doesn’t require two-factor authentication on a laptop that hasn’t been updated. But if IGA is managed from a spreadsheet, that’s exactly what happens. The employee lifecycle is only one of the IGA challenges that Identity 2.0 systems are not well-positioned to address. Take for example the expense and integration hassle of onboarding traditional IAM into manual IGA systems. The typical IGA system, like most enterprise systems, exists in a silo. Implementing manual IGA on systems such as HR, CRM, finance, and operations means writing numerous custom integrations.
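To see why the "mover" step is where manual IGA typically fails, consider this small Python sketch; the role names and entitlements are invented for illustration, not drawn from any product:

    # Compute the grant/revoke delta when a user changes roles. Manual
    # processes tend to perform the grant but forget the revoke.
    ROLE_ENTITLEMENTS = {
        "sales_rep":     {"crm_read", "crm_write", "email"},
        "sales_manager": {"crm_read", "crm_write", "email", "forecast_tool"},
        "finance":       {"erp_read", "email"},
    }

    def reconcile(old_role: str, new_role: str) -> tuple[set, set]:
        # Returns (to_grant, to_revoke) for a user moving between roles.
        old = ROLE_ENTITLEMENTS[old_role]
        new = ROLE_ENTITLEMENTS[new_role]
        return new - old, old - new

    grant, revoke = reconcile("sales_manager", "finance")
    print("grant:", grant)    # erp_read
    print("revoke:", revoke)  # crm_read, crm_write, forecast_tool

Automated IGA effectively runs this reconciliation on every lifecycle event; a spreadsheet-driven process relies on someone remembering to do the second half.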


What Happens If a Cloud Provider Shuts You Out?

There are other reasons, such as sudden outages or the shutdown of a cloud provider, for organizations to create plans to salvage their code and get back online quickly, Valentine says. Heikki Nousiainen, CTO at Aiven, also says the threat of getting cut off by all three major cloud providers is very low for most businesses -- yet companies may want to maintain the ability to move code around for disaster recovery needs. “They are rare, but we sometimes see these big outages touch Google, AWS, or Azure in one or more regions,” he says. Companies with very time-sensitive online business needs, for example, may want to maintain the ability to roll over to a backup elsewhere, Nousiainen says. He recommends exploring true multi-cloud options where companies can select providers freely without being locked in, and also going with open source technology because that lets the same set of services run in different clouds. Some of these options can come at a bit of a premium, though Nousiainen says the overall benefits may be worth it. “There are costs associated, but typically when that investment goes into preparing infrastructure as code it also helps with many other problems, such as disaster recovery.”
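One portable pattern consistent with Nousiainen's advice is to program against a small storage interface and confine provider-specific calls to swappable adapters. The Python sketch below is purely illustrative; the classes are toy stand-ins, not real cloud SDK code:

    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class InMemoryStore(BlobStore):
        # Stand-in for a provider-specific adapter (S3, GCS, Azure Blob...).
        def __init__(self) -> None:
            self._blobs: dict[str, bytes] = {}

        def put(self, key: str, data: bytes) -> None:
            self._blobs[key] = data

        def get(self, key: str) -> bytes:
            return self._blobs[key]

    def backup(store: BlobStore) -> None:
        # Application code never names a vendor, so failing over to
        # another cloud only means constructing a different adapter.
        store.put("db-snapshot", b"...")

    backup(InMemoryStore())

Open source services (databases, queues, caches) extend the same idea one layer down: the service API stays constant even when the hosting cloud changes.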


Dead System Admin's Credentials Used for Ransomware Attack

In a case study published Tuesday, the researchers say the system administrator had died three months previously, but the account remained active. The researchers note that there are numerous reasons why the account could have been left open, including the possibility that the system admin had helped with the initial setup of the targeted firm's services. "Closing down the account would have stopped those services working, so keeping the account going was, we'd imagine, a convenient way of letting the dead person's work live on," according to the report. The Sophos report also notes that these types of "ghost" accounts are an increasing problem for security teams, especially if other parts of the company forget that they remain active after an employee has left or died. "In this case, the active use of the account of a recently deceased colleague ought to have raised suspicions immediately - except that the account was deliberately and knowingly kept going, making its abuse look perfectly normal and therefore unexceptionable, rather than making it seem weirdly paranormal and therefore raising an alarm," according to Sophos.
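A periodic stale-account audit is the obvious countermeasure. The Python sketch below is purely illustrative, with an invented data layout; a real audit would query the directory and the HR system of record:

    from datetime import date, timedelta

    # Hypothetical export: account status joined with HR departure dates.
    ACCOUNTS = [
        {"user": "jsmith", "active": True, "owner_departed": date(2020, 10, 1)},
        {"user": "adoe",   "active": True, "owner_departed": None},
    ]

    def ghost_accounts(accounts, grace_days=0):
        # Yield active accounts whose owner departed more than grace_days
        # ago. If a service genuinely depends on the access, it should be
        # moved to a named service account, not a dead colleague's login.
        today = date.today()
        for a in accounts:
            gone = a["owner_departed"]
            if a["active"] and gone and today - gone > timedelta(days=grace_days):
                yield a["user"]

    print(list(ghost_accounts(ACCOUNTS)))  # ['jsmith']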



Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability." -- Med Jones