Daily Tech Digest - November 25, 2020

To do in 2021: Get up to speed with quantum computing 101

For business leaders who are new to quantum computing, the overarching question is whether to invest the time and effort required to develop a quantum strategy, Savoie wrote in a recent column for Forbes. The business advantages could be significant, but developing this expertise is expensive and the ROI is still long term. Understanding early use cases for the technology can inform this decision. Savoie said that one early use for quantum computing is optimization problems, such as the classic traveling salesman problem of trying to find the shortest route that connects multiple cities. "Optimization problems hold enormous importance for finance, where quantum can be used to model complex financial problems with millions of variables, for instance to make stock market predictions and optimize portfolios," he said. Savoie said that one of the most valuable applications for quantum computing is to create synthetic data to fill gaps in data used to train machine learning models. "For example, augmenting training data in this way could improve the ability of machine learning models to detect rare cancers or model rare events, such as pandemics," he said. 
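
To make the traveling salesman example above concrete, here is a minimal classical sketch in Python, with invented city names and distances: it simply tries every tour, which works for a handful of cities but blows up factorially as the number of cities grows. That combinatorial explosion is why optimization problems of this shape are seen as early candidates for quantum approaches.

    # Brute-force TSP: feasible only for a handful of cities because the number
    # of tours grows factorially. City names and distances are illustrative.
    from itertools import permutations

    distances = {
        ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
        ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
    }

    def dist(a, b):
        return distances[(a, b)] if (a, b) in distances else distances[(b, a)]

    def shortest_tour(cities):
        start, rest = cities[0], cities[1:]
        best = None
        for order in permutations(rest):
            tour = (start,) + order + (start,)
            length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
            if best is None or length < best[0]:
                best = (length, tour)
        return best

    print(shortest_tour(["A", "B", "C", "D"]))  # (80, ('A', 'B', 'D', 'C', 'A'))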


SmartKey And Chainlink To Collaborate In Govt-Approved Blockchain Project

Chainlink is the missing link in developing and delivering a virtually limitless number of smart city integrations that combine SmartKey’s API and blockchain-enabled hardware with real-world data and systems to harness the power of automated data-driven IoT applications with tangible value. The two protocols are complementary: The SmartKey protocol manages access to different physical devices across the Blockchain of Things (BoT) space (e.g. opening a gate), while the Chainlink Network allows developers to connect SmartKey functionalities with different sources of data (e.g. weather data, user web apps). The integration focuses on connecting all the data and events sourced and delivered by the Chainlink ecosystem to the SmartKey connector, which then turns that data (commands issued by Ethereum smart contracts) into instructions for IoT devices (e.g. active sensors, GSM/GPS). Our connectors can also deliver information to Chainlink oracles confirming these real-world instructions were carried out (e.g. gate was opened), potentially leading to additional smart contract outputs. The confirmation of service delivery is a “contract key” that connects both ecosystems into one “world” and relays an Ethereum action to IoT devices.
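
As a purely conceptual sketch of that flow (every name and message shape below is hypothetical, not SmartKey's or Chainlink's actual API): a command originating from an Ethereum smart contract arrives via an oracle, a connector translates it into a device instruction, and a confirmation travels back so the contract can react to the outcome.

    # Hypothetical names throughout -- this only illustrates the direction of data
    # flow: oracle-delivered contract command -> device instruction -> confirmation.
    def send_to_device(device_id, action):
        # Placeholder for the connector's hardware call (e.g. over GSM).
        print(f"instructing {device_id}: {action}")
        return True

    def handle_oracle_command(command):
        # e.g. command = {"device_id": "gate-17", "action": "open"}
        device_id, action = command["device_id"], command["action"]
        confirmed = send_to_device(device_id, action)
        # The confirmation would be reported back to an oracle so the smart
        # contract can trigger any follow-up outputs.
        return {"device_id": device_id, "action": action, "confirmed": confirmed}

    print(handle_oracle_command({"device_id": "gate-17", "action": "open"}))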


DevOps + Serverless = Event Driven Automation

For the most part, Serverless is seen as Function as a Service (FaaS). While it is definitely true that most Serverless code being implemented today is FaaS, that’s not the destination, but the pitstop. The Serverless space is still evolving. Let’s take a journey and explore how far Serverless has come, and where it is going. Our industry started with what I call “Phase 1.0”, when we just started talking or hearing about Serverless, and for the most part just thought about it as Functions – small snippets of code running on demand and for a short period of time. AWS Lambda made this paradigm very popular, but it had its own limitations around execution time and protocols, as well as a poor local development experience. Since then, more people have realized that the same serverless traits and benefits could be applied to microservices and Linux containers. This leads us into what I’m calling “Phase 1.5”. Some solutions here completely abstract Kubernetes, delivering the serverless experience through an abstraction layer that sits on top of it, like Knative. By opening up Serverless to containers, users are not limited to function runtimes and can now use any programming language they want.
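
A minimal sketch of the "Phase 1.0" FaaS model described above: a short-lived Python function that AWS Lambda invokes once per event and bills per invocation. The event fields are illustrative rather than any specific AWS integration's schema.

    import json

    def handler(event, context):
        # Runs on demand, is billed per invocation, and must finish within the
        # platform's execution-time limit -- the constraints noted above.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"}),
        }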


Self-documenting Architecture

A self-documenting architecture would reduce the learning curve. It would accentuate poor design choices and help us to make better ones. It would help us to see the complexity we are adding to the big picture as we make changes in the small and help us to keep complexity lower. And it would save us from messy whiteboard diagrams that explain how one person incorrectly thinks the system works. ... As software systems evolve continually, individual decisions may appear to make sense in isolation, but from a big picture architectural perspective those changes may add unnecessary complexity to the system. With a self-documenting architecture, everybody who makes changes to the system can easily zoom out to the bigger picture and consider the wider implications of their changes. One of the reasons I use the Bounded Context Canvas is because it visualises all of the key design decisions for an individual service. Problems with inconsistent naming, poorly-defined boundaries, or highly-coupled public interfaces jump out at you. When these decisions are made in isolation they seem OK; it is only when considered in the bigger picture that the overall design appears sub-optimal.


Is graph technology the fuel that’s missing for data-based government?

Another government context for use of graphs is global smart city projects. For instance, in Turku, Finland, graph databases are being deployed to leverage IoT data to make better decisions about urban planning. According to Jussi Vira, CEO of Turku City Data, the IT services company that is assisting the city of Turku in realising its ideas: “A lack of clear ways to bridge the gap between data and business problems was inhibiting our ability to innovate and generate value from data”. By deploying graphs, his team is able to represent many real-world business problems as people, objects, locations and events, and their interrelationships. Turku City Data found graphs represent data in the same way in which business problems are described, so it was easier to match relevant datasets to concrete business problems. Adopting graph technology has enabled the city of Turku to deliver daily supplies to elderly citizens who cannot leave their homes because of the Covid-19 pandemic. The service determines routes through the city that optimise delivery speed and minimise transportation resources while maintaining unbroken temperature-controlled shipping requirements for foodstuffs and sensitive medication.
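
A hedged illustration of the underlying graph idea (not Turku's actual implementation): model locations as nodes and roads as weighted edges, then ask the graph for an efficient route. The sketch below uses the networkx library with made-up nodes and travel times.

    import networkx as nx

    g = nx.Graph()
    g.add_weighted_edges_from([
        ("depot", "district_a", 4),        # edge weights = minutes of travel time
        ("depot", "district_b", 2),
        ("district_b", "district_a", 1),
        ("district_a", "elderly_home", 3),
        ("district_b", "elderly_home", 7),
    ])

    route = nx.shortest_path(g, "depot", "elderly_home", weight="weight")
    print(route)  # ['depot', 'district_b', 'district_a', 'elderly_home'] -> 6 minutes total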


The Relationship Between Software Architecture And Business Models (and more)

A software architecture has to implement the domain concepts in order to deliver the how of the business model. There are an unlimited number of ways to model a business domain, however. It is not a deterministic, sequential process. A large domain must be decomposed into software sub-systems. Where should the boundaries be? Which responsibilities should live in each sub-system? There are many choices to make and the arbiter is the business model. A software architecture, therefore, is an opinionated model of the business domain which is biased towards maximising the business model. When software systems align poorly with the business domain, changes become harder and the business model is less successful. When developers have to mentally translate from business language to the words in code it takes longer and mistakes are more likely. When new pieces of work always slice across multiple sub-systems, it takes longer to make changes and deploy them. It is, therefore, fundamentally important to align the architecture and the domain as well as possible.


In 2021, edge computing will hit an inflection point

Data center marketplaces will emerge as a new edge hosting option. When people talk about the location of "the edge," their descriptions vary widely. Regardless of your own definition, edge computing technology needs to sit as close to "the action" as possible. It may be a factory floor, a hospital room, or a North Sea oil rig. In some cases, it can be in a data center off premises but still as close to the action as makes sense. This rules out many of the big data centers run by cloud providers or co-location services that are close to major population centers. If your enterprise is highly distributed, those centers are too far. We see a promising new option emerging that unites smaller, more local data centers in a cooperative marketplace model. New data center aggregators such as Edgevana and Inflect allow you to think globally and act locally, expanding your geographic technology footprint. They don't necessarily replace public cloud, content delivery networks, or traditional co-location services — in fact, they will likely enhance these services. These marketplaces are nascent in 2020 but will become a viable model for edge computing in 2021.


Why Security Awareness Training Should Be Backed by Security by Design

The concepts of "safe by design" or "secure by design" are well-established psychological enablers of behavior. For example, regulators and technical architects across the automobile and airline industries prioritize safety above all else. "This has to emanate across the entire ecosystem, from the seatbelts in vehicles, to traffic lights, to stringent exams for drivers," says Daniel Norman, senior solutions analyst for ISF and author of the report. "This ecosystem is designed in a way where an individual's ability to behave insecurely is reduced, and if an unsafe behavior is performed, then the impacts are minimized by robust controls." As he explains, these principles of security by design can translate to cybersecurity in a number of ways, including how applications, tools, policies, and procedures are all designed. The goal is to provide every employee role "with an easy, efficient route toward good behavior." This means sometimes changing the physical office environment or the digital user interface (UI) environment. For example, security by design to reduce phishing susceptibility might include implementing easy-to-use phishing reporting buttons within employee email clients. Similarly, it might mean creating colorful pop-ups in email platforms to remind users not to send confidential information.


Tech Should Enable Change, Not Drive It

Technology should remove friction and allow people to do their jobs, while enabling speed and agility. This means ensuring a culture of connectivity where there is trust, free-flowing ideation, and the ability to collaborate seamlessly. Technology can also remove interpersonal friction, by helping to build trust and transparency — for example, blockchain and analytics can help make corporate records more trustworthy, permitting easy access for regulators and auditors that may enhance trust inside and outside the organization. This is important; one study found that transparency from management is directly proportional to employee happiness. And happy employees are more productive employees. Technology should also save employees time, freeing them up to take advantage of opportunities for human engagement (or, in a pandemic scenario, enabling virtual engagement), as well as allowing people to focus on higher-value tasks. ... It’s vital that businesses recognize diversity and inclusion as a moral and a business imperative, and act on it. Diversity can boost creativity and innovation, improve brand reputation, increase employee morale and retention, and lead to greater innovation and financial performance.


Researchers bring deep learning to IoT devices

The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller – with no unnecessary parameters. “Then we deliver the final, efficient model to the microcontroller,” says Lin. To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight – instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. “It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine. The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile-time. “We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.”
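
A back-of-the-envelope sketch of the constraint Han describes: the model weights and the generated inference code together have to fit in roughly one megabyte of flash. The layer shapes and byte counts below are invented for illustration and are not the actual TinyNAS/TinyEngine accounting.

    FLASH_BUDGET = 1 * 1024 * 1024      # roughly 1 MB of flash on the microcontroller
    ENGINE_CODE_SIZE = 100 * 1024       # assumed size of the generated inference code

    conv_layers = [
        # (in_channels, out_channels, kernel_h, kernel_w) -- invented shapes
        (3, 16, 3, 3),
        (16, 32, 3, 3),
        (32, 64, 3, 3),
    ]

    def weight_bytes(layers, bytes_per_param=1):    # assume int8-quantized weights
        return sum(cin * cout * kh * kw * bytes_per_param
                   for cin, cout, kh, kw in layers)

    total = ENGINE_CODE_SIZE + weight_bytes(conv_layers)
    print(f"{total} of {FLASH_BUDGET} bytes used -> fits: {total <= FLASH_BUDGET}")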



Quote for the day:

"Empowerment is the magic wand that turns a frog into a prince. Never estimate the power of the people, through true empowerment great leaders are born." -- Lama S. Bowen

Daily Tech Digest - November 24, 2020

Why securing the DNS layer is crucial to fight cyber crime

When left insecure, DNS servers can result in devastating consequences for businesses that fall victim to attack. Terry Bishop, solutions architect at RiskIQ, says: “Malicious actors are constantly looking to exploit weak links in target organisations. A vulnerable DNS server would certainly be considered a high-value target, given the variety of directions that could be taken once compromised. “At RiskIQ, we find most organisations are unaware of about 30% of their external-facing assets. That can be websites, mail servers, remote gateways, and so on. If any of these systems are left unpatched, unmonitored or unmanaged, it presents an opportunity for compromise and further potential exploitation. Whether that is directed towards company assets or other more valuable infrastructure such as DNS servers depends on the motives of the attacker and the specifics of the breached environment.” Kevin Curran, senior member at the Institute of Electrical and Electronics Engineers (IEEE) and professor of cyber security at Ulster University, agrees that DNS attacks can be highly disruptive. In fact, an improperly working DNS layer would effectively break the internet, he says.


The Dark Side of AI: Previewing Criminal Uses

Criminals' Top Goal: Profit. If that's the high level, the applied level is that criminals have never shied away from finding innovative ways to earn an illicit profit, be it through social engineering refinements, new business models or adopting new types of technology. And AI is no exception. "Criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims and creating new, innovative criminal business models - all the while reducing their chances of being caught," according to the report. Thankfully, all is not doom and gloom. "AI promises the world greater efficiency, automation and autonomy," says Edvardas Šileris, who heads Europol's European Cybercrime Center, aka EC3. "At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology." ... Even criminal uptake of deepfakes has been scant. "The main use of deepfakes still overwhelmingly appears to be for non-consensual pornographic purposes," according to the report. It cites research from last year by the Amsterdam-based AI firm Deeptrace, which "found 15,000 deepfake videos online..."


Flash storage debate heats up over QLC SSDs vs. HDDs

Rosemarin said some vendors front end QLC with TLC flash, storage class memory or DRAM to address caching and performance issues, but they run the risk of scaling problems and destroying the cost advantage that the denser flash technology can bring. "We had to launch a whole new architecture with FlashArray//C to optimize and run QLC," Rosemarin said. "Otherwise, you're very quickly going to get in a position where you're going to tell clients it doesn't make sense to use QLC because [the] architecture can't do it cost-efficiently." Vast Data's Universal Storage uses Intel Optane SSDs, built on faster, more costly 3D XPoint technology, to buffer writes, store metadata and improve latency and endurance. But Jeff Denworth, co-founder and chief marketing officer at the startup, said the system brings cost savings over alternatives through better longevity and data-reduction code, for starters. "We ask customers all the time, 'If you had the choice, would you buy a hard drive-based system, if cost wasn't the only issue?' And not a single customer has ever said, 'Yeah, give me spinning rust,'" Denworth said. Denser NAND flash chip technology isn't the only innovation that could help to drive down costs of QLC flash. Roger Peene, a vice president in Micron's storage business unit, spotlighted the company's latest 176-layer 3D NAND that can also boost density and lower costs.


Instrumenting the Network for Successful AIOps

The highest quality network data is obtained by deploying devices such as network TAPs that mirror the raw network traffic. Many vendors offer physical and virtual versions of these to gather packet data from the data center as well as virtualized segments of the network. AWS and Google Cloud have both launched Virtual Private Cloud (VPC) traffic/packet mirroring features in the last year that allow users to duplicate traffic to and from their applications and forward it to cloud-native performance and security monitoring tools, so there are solid options for gathering packet data from cloud-hosted applications too. These network TAPs let network monitoring tools view the raw data without impacting the actual data-plane. When dealing with high-sensitivity applications such as ultra-low-latency trading, high-quality network monitoring tools use timestamping with nanosecond accuracy to identify bursts at millisecond resolution that might cause packet drops that normal SNMP-type counters can’t explain. This fidelity of data is relevant in other demanding applications such as real-time video decoding, gaming multicast servers, HPC and other critical IoT control systems.
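
A simplified sketch of the burst detection alluded to above: bucket nanosecond packet timestamps into millisecond windows and flag any window whose byte count exceeds what the link can drain in that time. The link rate and sample packets are invented; real tools do this at line rate in hardware.

    from collections import Counter

    LINK_BYTES_PER_MS = 1_250_000      # what a 10 Gbit/s link can drain per millisecond
    packets = [(1_000_200, 1500), (1_000_900, 1500), (1_700_000, 9000)]  # (timestamp ns, bytes)

    buckets = Counter()
    for ts_ns, size in packets:
        buckets[ts_ns // 1_000_000] += size      # group packets into 1 ms windows

    bursts = {ms: b for ms, b in buckets.items() if b > LINK_BYTES_PER_MS}
    print(bursts or "no microbursts in this capture")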


How to create an effective software architecture roadmap

The iteration model demonstrates how the architecture and related software systems will change and evolve on the way to a final goal. Each large iteration segment represents one milestone goal of an overall initiative, such as updating a particular application database or modernizing a set of legacy services. Then, each one of those segments contains a list of every project involved in meeting that milestone. For instance, a legacy service modernization iteration requires a review of the code, refactoring efforts, testing phases and deployment preparations. While architects may feel pressured to create a realistic schedule from the start of the iteration modeling phase, Richards said that it's not harmful to be aspirational, imaginative or even mildly unrealistic at this stage. Since this is still an unrefined plan, try to ignore limitations like cost and staffing, and focus on goals. ... Once an architect has an iteration model in place, the portfolio model injects reality into the roadmap. In this stage, the architect or software-side project lead analyzes the feasibility of the overall goal. They examine the initiative, the requirements for each planned iteration and the resources available for the individual projects within those iterations. 


How new-age data analytics is revolutionising the recruitment and hiring segment

There are innumerable advantages attached to opting for AI over an ordinary recruitment team. With the introduction of AI, companies can easily lower the costs involved in maintaining a recruitment team. The highly automated screening procedures select quality candidates that in turn will help the organization grow and retain better personnel – a factor that is otherwise overlooked in the conventional recruitment process. Employing AI and ML automates the whole recruitment process and helps eliminate the probability of human errors. Automation increases efficiency and improves the performance of other departments of the company. The traditional recruitment process tends to be very costly. Several teams are often needed for the purpose of hiring people in a company. But with the help of AI and ML, the unnecessary costs can be done away with and the various stages of hiring can all be conducted on a single dedicated platform. Additionally, if the company engages in a lot of contract work, then AI can be used for analysing the project plan and predicting the kinds, numbers, ratio and skills of workers that may be required for the purpose. The scope of AI and ML cannot be undermined by the capabilities of current systems.


6 experts share quantum computing predictions for 2021

"Next year is going to be when we start seeing what algorithms are going to show the most promise in this near term era. We have enough qubits, we have really high fidelities, and some capabilities to allow brilliant people to have a set of tools that they just haven't had access to," Uttley said. "Next year what we will see is the advancement into some areas that really start to show promise. Now you can double down instead of doing a scattershot approach. You can say, 'This is showing really high energy, let's put more resources and computational time against it.' Widespread use, where it's more integrated into the typical business process, that is probably a decade away. But it won't be that long before we find applications for which we're using quantum computers in the real world. That is in more the 18-24 month range." Uttley noted that the companies already using Honeywell's quantum computer are increasingly interested in spending more and more time with it. Companies working with chemicals and the material sciences have shown the most interest he said, adding that there are also healthcare applications that would show promise.


How Industrial IoT Security Can Catch Up With OT/IT Convergence

The bigger challenge, he says, is not in the silicon of servers and networking appliances but in the brains of security professionals. "The harder problem, I think, is the skills problem, which is that we have very different expertise existing within companies and in the wider security community, between people who are IT security experts and people who are OT security experts," Tsonchev says. "And it's very rare to find one individual where those skills converge." It's critical that companies looking to solve the converged security problem, whether in technology or technologists, to figure out what the technology and skills need to look like in order to support their business goals. And they need to recognize that the skills to protect both sides of the organization may not reside in a single person, Tsonchev says. "There's obviously a very deep cultural difference that comes from the nature of the environments characterized by the standard truism that confidentiality is the priority in IT and availability is the priority in OT," he explains. And that difference in mindset is natural – and to some extent essential – based on the requirements of the job. Where the two can begin to come together, Tsonchev says, is in the evolution away from a protection-based mindset to a way of looking at security based on risk and risk tolerance.


Dark Data: Goldmine or Minefield?

The issue here is that companies are still thinking in terms of sandboxes even when they are face-to-face with the entire beach. A system that considers analytics and governance to be flip sides of the same coin and incorporates them synergistically across all enterprise data is called for. Data that has been managed has the potential to capture the corpus of human knowledge within the organization, reflecting the human intent of a business. It can offer substantial insight into employee work patterns, communication networks, subject matter expertise, and even organizational influencers and business processes. It also holds the potential for eliminating duplicative human effort, which can be an excellent tool to increase productivity and output. The results of this alone are a sure-fire way to boost productivity, spot common pain points that hold back the workstream, and surface insights into where untapped potential may lie. Companies that have successfully bridged information management with analytics are answering fundamental business questions that have massive impact on revenue: Who are the key employees? ... With the increase in sophistication of analytics and its convergence with information governance, we will likely see a renaissance for this dark data that is presently largely a liability.


NCSC issues retail security alert ahead of Black Friday sales

“We want online shoppers to feel confident that they’re making the right choices, and following our tips will reduce the risk of giving an early gift to cyber criminals. If you spot a suspicious email, report it to us, or if you think you’ve fallen victim to a scam, report the details to Action Fraud and contact your bank as soon as you can.” Helen Dickinson, chief executive of the British Retail Consortium (BRC), added: “With more and more of us browsing and shopping online, retailers have invested in cutting-edge systems and expertise to protect their customers from cyber threats, and the BRC recently published a Cyber Resilience Toolkit for extra support to help make the industry more secure. “However, we as customers also have a part to play and should follow the NCSC’s helpful tips for staying safe online.” The NCSC’s advice, which can be accessed on its website, includes a number of tips, such as being selective about where you shop, only providing necessary information, using secure and protected payments, securing online accounts, identifying potential phishing attempts, and how to deal with any problems. Carl Wearn, head of e-crime at Mimecast, commented: “Some of the main things to look out for include phishing emails and brand spoofing, as we are likely to see an increase in both.



Quote for the day:

“Focus on the journey, not the destination. Joy is found not in finishing an activity but in doing it.” -- Greg Anderson

Daily Tech Digest - November 23, 2020

Superhuman resources: How HR leaders have redefined their C-suite role

CHROs have to be able to envision how the strategy will be executed, the talents and skills required to accomplish the work, and the qualities needed from leaders to maximize the organization’s potential. Increasingly, that requires a nuanced understanding of how technology and humans will interact. “HR leaders sit at a crossroads because of the rise of artificial intelligence and can really predict whether a company is going to elevate their humans or eliminate their humans,” said Ellyn Shook, the CHRO of professional-services firm Accenture. “We’re starting to see new roles and capabilities in our own organization, and we’re seeing a whole new way of doing what we call work planning. The real value that can be unlocked lies in human beings and intelligent technologies working together.” ... CHROs must operate at a slightly higher altitude than their peers on the leadership team to ensure that the different parts of the business work well together. At their best, these leaders view the entire organization as a dynamic 3D model, and can see where different parts are meshing well and building on other parts, and also where there are gaps and seams. The key is to make the whole organization greater than the sum of its parts.


Three IT strategies for the new era of hybrid work

While the hyper-automation strategy will make life much easier for IT teams by delivering on greater automated experiences, there will always be issues that humans will have to resolve. Organisations must equip their IT teams with the tools to handle these issues remotely and securely to succeed in an increasingly complex environment. This begins with utilising AI and building on deep learning capabilities that provide critical information to IT teams in real time. Say an employee is unable to access restricted customer information from his home network to complete a sales order and needs to enable VPN access. With the right software platforms, the IT representative will be able to guide him remotely, to push the necessary VPN software to his device, configure the required access information and provision his access through automation scripts. IT would also be able to discover the model of the router used in his home network if required and assist with router settings if the employee grants the rights and authorisation. IT can also assess its vulnerabilities and advise the employee accordingly. In the past, this work would have had to be completed in the office. With hybrid work environments, going back to the office may not even be an option.


Security pros fear prosecution under outdated UK laws

MP Ruth Edwards, who previously led on cyber security policy for techUK, said: “The Computer Misuse Act, though world-leading at the time of its introduction, was put on the statute book when 0.5% of the population used the internet. The digital world has changed beyond recognition, and this survey clearly shows that it is time for the Computer Misuse Act to adapt. “This year has been dominated by a public health emergency – the coronavirus pandemic, but it has also brought our reliance on cyber security into stark relief. We have seen attempts to hack vaccine trials, misinformation campaigns linking 5G to coronavirus, a huge array of coronavirus-related scams, an increase in remote working and more services move online. “Our reliance on safe and resilient digital technologies has never been greater. If ever there was going to be a time to prioritise the rapid modernisation of our cyber legislation, and review the Computer Misuse Act, it is now,” she said. The study is the first piece of work to quantify and analyse the views of the wider security community in the UK on this issue, and the campaigners say they have found substantial concerns and confusion about the CMA that are hampering the UK’s cyber defences.


An In-Depth Explanation of Code Complexity

By knowing how many independent paths there are through a piece of code, we know how many paths there are to test. I'm not advocating for 100% code coverage by the way—that's often a meaningless software metric. However, I always advocate for as high a level of code coverage as is both practical and possible. So, by knowing how many code paths there are, we can know how many paths we have to test. As a result, you have a measure of how many tests are required, at a minimum, to ensure that the code's covered. ... By reducing software complexity, we can develop with greater predictability. What I mean by that is we're better able to say—with confidence—how long a section of code takes to complete. By knowing this, we're better able to predict how long a release takes to ship. Based on this knowledge the business or organization is better able to set its goals and expectations, especially ones that are directly dependent on said software. When this happens, it’s easier to set realistic budgets, forecasts, and so on. Helping developers learn and grow is the final benefit of understanding why their code is considered complex. The tools I've used to assess complexity up until this point don't do that. What they do is provide an overall or granular complexity score.
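
A small illustration of the "independent paths" idea: the function below has a cyclomatic complexity of 3 (two decision points plus one), so at least three test cases are needed to exercise every path. The shipping rules are invented for the example.

    def shipping_cost(order_total, is_member):
        if order_total >= 100:       # path 1: free shipping
            return 0
        if is_member:                # path 2: discounted flat rate
            return 3
        return 8                     # path 3: standard rate

    # Minimum test set: one case per independent path.
    assert shipping_cost(150, False) == 0
    assert shipping_cost(50, True) == 3
    assert shipping_cost(50, False) == 8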


How DevOps Teams Get Automation Backwards

Do you know what data (and metadata) needs to be backed up in order to successfully restore? Do you know how it will be stored, protected and monitored? Does your storage plan comply with relevant statutes, such as CCPA and GDPR? Do you regularly execute recovery scenarios, to test the integrity of your backups and the effectiveness of your restore process? At the heart of each of the above examples, the problem is due in large part to a top-down mandate, and a lack of buy-in from the affected teams. If the DevOps team has a sense of ownership over the new processes, then they will be much more eager to take on any challenges that arise. DevOps automation isn’t the solution to every problem. Automated UI tests are a great example of an automation solution that’s right for some types of organizations, but not for others. These sorts of tests, depending on frequency of UI changes, can be fragile and difficult to manage. Therefore, teams looking to adopt automated UI testing should first assess whether the anticipated benefits are worth the costs, and then ensure they have a plan for monitoring and maintaining the tests. Finally, beware of automating any DevOps process that you don’t use on a frequent basis.


Security by Design: Are We at a Tipping Point?

A big contributor for security flat-footedness is the traditional “trust but verify” approach, with bolt-on and reactive architectures (and solutions) that make security complex and expensive. Detecting a threat, assessing true vs. false alerts, responding to incidents holistically and doing it all in a timely fashion demands a sizeable security workforce; a strong, well-practiced playbook; and an agile security model. As we have learned over the years, this has been hard to achieve in practice—even harder for small or mid-size organizations and those with smaller budgets. Even though dwell time has reduced in the last few years, attackers routinely spend days, weeks or months in a breached environment before being detected. Regulations like the EU General Data Protection Regulation (GDPR) mandate reporting of notifiable data breaches within 72 hours, even as the median dwell time stands at 56 days, rising to 141 days for breaches not detected internally. Forrester analyst John Kindervag envisioned a new approach in 2009, called “zero trust.” It was founded on the belief that trust itself represents a vulnerability and security must be designed into business with a “never trust, always verify” model.


Distributors adding security depth

“With the rapidly changing security landscape, and home working seemingly here to stay, this partnership will help organisations alleviate these security pressures through one consolidated cloud solution. Together with Cloud Distribution, we will continue to expand our UK Partner network, ensuring we are offering robust cloud security solutions with our approach that takes user organisations beyond events and alerts, and into 24/7 automated attack prevention,” he said.  Other distributors have also taken steps to add depth to their portfolios. Last month, e92plus also moved to bolster its offerings with the signing of web security player Source Defense. The distie is responding to the threats around e-commerce and arming resellers with tools to help customers that have been forced to sell online during the pandemic. The shift online has come as threats have spiked and the criminal activity around online transactions has increased. “As more businesses look to transact business online, bad actors are exploiting client-side vulnerabilities that aren’t protected by traditional solutions like web application firewalls,” said Sam Murdoch, managing director at e92cloud.


3 Steps CISOs Can Take to Convey Strategy for Budget Presentations

CISOs recognize they cannot reduce their organization's cyber-risk to zero. Still, they can reduce it as much as possible by focusing on eliminating the most significant risks first. Therefore, when developing a budget, CISOs should consider a proactive risk-based approach that homes in on the biggest cyber-risks facing the business. This risk-based approach allows the CISO to quantify the risk across all areas of cyber weakness, and then prioritize where efforts are best expended. This ensures maximum impact from fixed budgets and teams. The fact is, the National Institute of Standards and Technology reports that an average breach can cost an organization upward of $4 million — more costly than the overall budget for many organizations. Consider a scenario where one CISO invests heavily in proactive measures, successfully avoiding a major breach, while another invests primarily in reactive measures and ends up cleaning up after a major breach. The benefit is that one (the proactively inclined CISO) ends up spending 10x less overall. ... While there is more awareness among top leadership and board members regarding the daunting challenges of cybersecurity, a board member's view of cybersecurity is primarily concerned with cybersecurity as a set of risk items, each with a certain likelihood of happening with some business impact.


Keeping data flowing could soon cost billions, business warned

As soon as the UK leaves the EU, it will also cease to be part of the GDPR-covered zone – and other mechanisms will be necessary to allow data to move between the two zones. The UK government, for its part, has already green-lighted the free flow of digital information from the UK to the EU, and has made it clear that it hopes the EU will return the favor. This would be called an adequacy agreement – a recognition that UK laws can adequately protect the personal data of EU citizens. But whether the UK will be granted adequacy is still up for debate, with just over one month to go. If no deal is achieved on data transfers, companies that rely on EU data will need to look at alternative solutions. These include standard contractual clauses (SCCs), for example, which are signed contracts between the sender and the receiver of personal data that are approved by an EU authority, and need to be drawn for each individual data transfer. SCCs are likely to be the go-to data transfer mechanism in the "overwhelming majority of cases," according to the report, and drafting the contracts for every single relevant data exchange will represent a costly bureaucratic and legal exercise for many firms. UCL's researchers estimated, for example, that the London-based university would have to amend and update over 5,000 contracts.


Even the world’s freest countries aren’t safe from internet censorship

Ensafi’s team found that censorship is increasing in 103 of the countries studied, including unexpected places like Norway, Japan, Italy, India, Israel and Poland. These countries, the team notes, are rated some of the world’s freest by Freedom House, a nonprofit that advocates for democracy and human rights. They were among nine countries where Censored Planet found significant, previously undetected censorship events between August 2018 and April 2020. They also found previously undetected events in Cameroon, Ecuador and Sudan. While the United States saw a small uptick in blocking, mostly driven by individual companies or internet service providers filtering content, the study did not uncover widespread censorship. However, Ensafi points out that the groundwork for that has been put in place here. “When the United States repealed net neutrality, they created an environment in which it would be easy, from a technical standpoint, for ISPs to interfere with or block internet traffic,” she said. “The architecture for greater censorship is already in place and we should all be concerned about heading down a slippery slope.”



Quote for the day:

"Beginnings are scary, endings are usually sad, but it's the middle that counts the most." -- Birdee Pruitt

Daily Tech Digest - November 22, 2020

It's time for banks to rethink how they secure customer information

To sum it up, banks and credit card companies really don't care to put too much effort into securing the accounts of customers. That's crazy, right?  The thing is, banks and credit card companies know they have a safety net to prevent them from crashing to the ground. That safety net is fraud insurance. When a customer of a bank has their account hacked or card number stolen, the institution is fairly confident that it will get its--I mean, the customer's--money back. But wait, the revelations go even deeper. These same institutions also admit (not to the public) that hackers simply have more resources than they do. Banks and credit card companies understand it's only a matter of time before a customer account is breached--these institutions deal with this daily. These companies also understand the futility of pouring too much investment into stopping hackers from doing their thing. After all, the second a bank invests millions into securing those accounts from ne'er-do-wells, the ne'er-do-wells will figure out how to get around the new security methods and protocols. From the bank's point of view, that's money wasted. It's that near-nihilistic point of view that causes customers no end of frustration, but it doesn't have to be that way.


The New Elements of Digital Transformation

Even as some companies are still implementing traditional automation approaches such as enterprise resource planning, manufacturing execution, and product life cycle management systems, other companies are moving beyond them to digitally reinvent operations. Amazon’s distribution centers deliver inventory to workers rather than sending workers to collect inventory. Rio Tinto, an Australian mining company, uses autonomous trucks, trains, and drilling machinery so that it can shift workers to less dangerous tasks, leading to higher productivity and better safety. In rethinking core process automation, advanced technologies are useful but not prerequisites. Asian Paints transformed itself from a maker of coatings in 13 regions in India to a provider of coatings, painting services, design services, and home renovations in 17 countries by first establishing a common core of digitized processes under an ERP system. This provided a foundation to build upon and a clean source of data to generate insights. Later, the company incorporated machine learning, robotics, augmented reality, and other technologies to digitally enable its expansion.


AI startup Graphcore says most of the world won't train AI, just distill it

Graphcore is known for building both custom chips to power AI, known as accelerators, and also full computer systems to house those chips, with specialized software. In Knowles's conception of the pecking order of deep learning, the handful of entities that can afford "thousands of yotta-FLOPS" of computing power -- the number ten raised to the 24th power -- are the ones that will build and train trillion-parameter neural network models that represent "universal" models of human knowledge. He offered the example of huge models that can encompass all of human languages, rather like OpenAI's GPT-3 natural language processing neural network. "There won't be many of those" kinds of entities, Knowles predicted. Companies in the market for AI computing equipment are already talking about projects underway to use one trillion parameters in neural networks. By contrast, the second order of entities, the ones that distill the trillion-parameter models, will require far less computing power to re-train the universal models to something specific to a domain. And the third entities, of course, even less power. Knowles was speaking to the audience of SC20, a supercomputing conference which takes place in a different city each year, but this year is being held as a virtual event given the COVID-19 pandemic.
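
For readers unfamiliar with the term, "distilling" a large model generally means training a much smaller student network to match the softened outputs of a big teacher, which takes far less compute than training from scratch. The sketch below is generic knowledge distillation in PyTorch, not Graphcore's specific recipe.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soft targets: match the teacher's temperature-softened class distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * (temperature ** 2)
        # Hard targets: ordinary supervised loss on the labelled data.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Toy usage with random tensors standing in for real model outputs.
    student = torch.randn(4, 10)
    teacher = torch.randn(4, 10)
    labels = torch.randint(0, 10, (4,))
    print(distillation_loss(student, teacher, labels))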


5 Reasons for the Speedy Adoption of Blockchain Technology

Blockchain technology can only handle three to seven transactions per second, while legacy transaction processing systems are able to process tens of thousands of them every second. This led many observers to be unsure of the potential of blockchain as a viable option for large-scale applications. However, recent developments have resulted in promising ways to close this performance gap, and a new consensus mechanism is being developed. This mechanism enables participants (some of whom are unknown to each other) to trust the validity of the transactions. While performance may be sluggish and the consensus mechanism may consume a lot of computational resources, better performance is the key to popularizing the use of blockchain technology. The latest designs aim to reduce the time- and energy-intensive mining required to validate every transaction. Various blockchain-based applications are able to choose between performance, functionality, and security to suit what is most appropriate for the application. This consensus model is being especially appreciated in industries like auto-leasing, insurance, healthcare, supply chain management, trading, and more.


How next gen Internal Audit can play strategic role in risk management post-pandemic

The purpose of a business continuity plan is to ensure that the business is ready to survive a critical incident. It permits an instantaneous response to the crisis so as to shorten recovery time and mitigate the impact. This pandemic has presented an unprecedented “critical incident” for the globe. With unknown reach and duration, worldwide implications, and no base for accurate projections, we are very much in uncharted territory. Many organizations used to develop a disaster recovery plan and business continuity procedure that was rarely put to the test in a real crisis situation. With the arrival of newer risks (e.g. cyber-attacks, data transfer confidentiality issues, struggles to maintain supply levels, workforce management challenges, physical losses, operational disruptions, changes of marketing platform, and the increased volatility and interdependency of the global economy), the traditionally accepted Business Continuity & Crisis Management Models are being rapidly, continuously and constructively challenged. Therefore, organizations need adequate planning resulting in immediate response, better decision-making, maximum recovery, effective communications, and sound contingency plans for various scenarios that may suddenly arise.


How to Build a Production Grade Workflow with SQL Modelling

A constructor creates a test query where a common table expression (CTE) represents each input mock data model, and any references to production models (identified using dbt’s ‘ref’ macro) are replaced by references to the corresponding CTE. Once you execute a query, you can compare the output to an expected result. In addition to an equality assertion, we extended our framework to support all expectations from the open-source Great Expectations library to provide more granular assertions and error messaging. The main downside to this framework is that it requires a roundtrip to the query engine to construct the test data model given a set of inputs. Even though the query itself is lightweight and processes only a handful of rows, these roundtrips to the engine add up. It becomes costly to run an entire test suite on each local or CI run. To solve this, we introduced tooling both in development and CI to run the minimal set of tests that could potentially break given the change. This was straightforward to implement with accuracy because of dbt’s lineage tracking support; we simply had to find all downstream models (direct and indirect) for each changed model and run their tests.
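
A hedged sketch of the test-construction idea described above: build a CTE from each set of mock rows, substitute it for the corresponding production model referenced via dbt's ref macro, and wrap the model's SQL so it reads from the mocks. The function and model names below are hypothetical, not the article's actual framework.

    import re

    def mock_cte(name, rows):
        union = " UNION ALL ".join(
            "SELECT " + ", ".join(f"{value!r} AS {column}" for column, value in row.items())
            for row in rows
        )
        return f"{name} AS ({union})"

    def build_test_query(model_sql, mocks):
        # Replace {{ ref('some_model') }} with the bare CTE name.
        body = re.sub(r"\{\{\s*ref\(['\"](\w+)['\"]\)\s*\}\}", r"\1", model_sql)
        ctes = ",\n".join(mock_cte(name, rows) for name, rows in mocks.items())
        return f"WITH {ctes}\n{body}"

    query = build_test_query(
        "SELECT shop_id, SUM(amount) AS total FROM {{ ref('orders') }} GROUP BY shop_id",
        {"orders": [{"shop_id": 1, "amount": 10}, {"shop_id": 1, "amount": 5}]},
    )
    print(query)  # run this against the engine and compare the output to expected rows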


Google Services Weaponized to Bypass Security in Phishing, BEC Campaigns

For its part, Google stresses the company is taking every measure to keep malicious actors off their platforms. “We are deeply committed to protecting our users from phishing abuse across our services, and are continuously working on additional measures to block these types of attacks as methods evolve,” a Google spokesperson told Threatpost by email. The statement added that Google’s abuse policy prohibits phishing and emphasized that the company is aggressive in combating abuse. “We use proactive measures to prevent this abuse and users can report abuse on our platforms,” the statement said. “Google has strong measures in place to detect and block phishing abuse on our services.” Sambamoorthy told Threatpost that the security responsibility does not rest on Google alone and that organizations should not rely solely on Google’s security protections for their sensitive data. “Google faces a fundamental dilemma because what makes their services free and easy to use also lowers the bar for cybercriminals to build and launch effective phishing attacks,” he said. “It’s important to remember that Google is not an email security company — their primary responsibility is to deliver a functioning, performant email service.”


Democratize Data to Empower your Organization and Unleash More Value

Organizations, unsure whether they can trust their data, limit access, instead of empowering the whole enterprise to achieve new insights for practical uses. To drive new value—such as expanded customer marketing and increasing operational efficiencies—democratizing data demands building out a trusted, governed data marketplace, enabling mastered and curated data to drive your innovations that leapfrog the competition. To do this, trust assurance has become the critical enabler. But how to accomplish trust assurance? ... So, what is trust assurance, and how can data governance help accelerate it? If an organization is to convert data insights into value that drives new revenue, improves customer experience, and enables more efficient operations, the data needs controls to help ensure it’s both qualitative for reliable results as well as protected for appropriate, and compliant, use. According to IDC, we’re seeing a 61 percent compound annual growth rate (CAGR) in worldwide data at this moment—a rate of increase that will result in 175 zettabytes of data worldwide by 2025.


DDoS mitigation strategies needed to maintain availability during pandemic

According to Graham-Cumming, enterprises should start the process of implementing mitigating measures by conducting thorough due diligence of their entire digital estate and its associated infrastructure, because that is what attackers are doing. “The reality is, particularly for the ransomware folks, these people are figuring out what in your organisation is worth attacking,” he says. “It might not be the front door, it might not be the website of the company as that might not be worth it – it might be a critical link to a datacentre where you’ve got a critical application running, so we see people doing reconnaissance to figure out what the best thing to attack is. “Do a survey of what you’ve got exposed to the internet, and that will give you a sense of where attackers might go. Then look at what really needs to be exposed to the internet and, if it does, there are services out there that can help.” This is backed up by Goulding at Nominet, who says that while most reasonably mature companies will have already considered DDoS mitigation, those that have not can start by identifying which assets they need to maintain availability for and where they are located.


Empathy: The glue we need to fix a fractured world

Our most difficult moments force us to contend with our vulnerability and our mortality, and we realize how much we need each other. We’ve seen this during the pandemic and the continued struggle for racial justice. There has been an enormous amount of suffering but also an intense desire to come together, and a lot of mutual aid and support. This painful moment has produced a lot of progress and clarity around our values. Yet modern life, especially in these pandemic times, makes it harder than ever to connect with each other, and this disconnectedness can erode our empathy. But we can fight back. We can work to empathize more effectively. The pandemic, the economic collapse associated with it, and the fight for racial justice have increased all sorts of feelings, including empathy, anger, intolerance, fear, and stress. A big question for the next two to five years is which tide will prevail. ... Another problem is that there’s tribalism within organizations, especially larger organizations and those that are trying to put different groups of people with different goals under a single tent. For instance, I’ve worked with companies that include both scientists and people who are trying to market the scientists’ work. 



Quote for the day:

"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destines." -- Anyaele Sam Chiyson

Daily Tech Digest - November 21, 2020

How phishing attacks are exploiting Google's own tools and services

Armorblox's co-founder and head of engineering, Arjun Sambamoorthy, explains that Google is a ripe target for exploitation due to the free and democratized nature of many of its services. Adopted by so many legitimate users, Google's open APIs, extensible integrations, and developer-friendly tools have also been co-opted by cybercriminals looking to defraud organizations and individuals. Specifically, attackers are using Google's own services to sneak past binary security filters that look for traffic based on keywords or URLs. ... cybercriminals spoof an organization's security administration team with an email telling the recipient that they've failed to receive some vital messages because of a storage quota issue. A link in the email asks the user to verify their information in order to resume email delivery. The link in the email leads to a phony login page hosted on Firebase, Google's mobile platform for creating apps, hosting files and images, and serving up user-generated content. This link goes through one redirection before landing on the Firebase page, confusing any security product that tries to follow the URL to its final location. As it's hosted by Google, the parent URL of the page will escape the notice of most security filters.


Women in Data: How Leaders Are Driving Success

Next-gen analytics have helped to shift perception and enable the business to accelerate the use of data, according to panelist Barb Latulippe, Sr. Director Enterprise Data at Edward Life Sciences, who emphasized the trend toward self-service in enterprise data management. The days of the business going to IT are gone—a data marketplace provides a better user experience. Coupled with an effort to increase data literacy throughout the enterprise, such data democratization empowers users to access the data they need themselves, thanks to a common data language. This trend was echoed by panelist Katie Meyers, senior vice president at Charles Schwab responsible for data sales and service technologies. A data leader for 25 years, Katie focused on the role cloud plays in enabling new data-driven capabilities. Katie emphasized that we’re living in a world where data grows faster than our ability to manage the infrastructure. By activating data science and artificial intelligence (AI), Charles Schwab can leverage automation and machine learning to enable both the technical and business sides of the organization to more effectively access and use data. 


Developer experience: an essential aspect of enterprise architecture

Code that provides the structure and resources to allow a developer to meet their objectives with a high degree of comfort and efficiency is indicative of a good developer experience. Code that is hard to understand, hard to use, fails to meet expectations and creates frustration for the developer is typical of a bad developer experience. Technology that offers a good developer experience allows a programmer to get up and running quickly with minimal frustration. A bad developer experience—one that is a neverending battle trying to figure out what the code is supposed to do and then actually getting it to work—costs time and money and, in some cases, can increase developer turnover. When working with a company’s code is torturous enough, a talented developer who has the skills to work anywhere will take one of their many other opportunities and leave. There is only so much friction users will tolerate. While providing a good developer experience is known to be essential as one gets closer to the user of a given software, many times, it gets overlooked at the architectural design level. However, this oversight is changing. Given the enormous demand for more software at faster rates of delivery, architects are paying attention.


ISP Security: Do We Expect Too Much?

"The typical Internet service provider is primarily focused on delivering reliable, predictable bandwidth to their customers," Crisler says. "They value connectivity and reliability above everything else. As such, if they need to make a trade-off decision between security and uptime, they will focus on uptime." To be fair, demand for speed and reliable connections was crushing many home ISPs in the early days of the pandemic. For some, it remains a serious strain. "In the early weeks of the pandemic, when people started using their residential connections at once, ISPs were faced with major outages as bandwidth oversubscription and increased botnet traffic created serious bottlenecks for people working at home," says Bogdan Botezatu, director of threat research and reporting at Bitdefender. ISPs' often aging and inadequately protected home hardware presents many security vulnerabilities as well. "Many home users rent network hardware from their ISP. These devices are exposed directly to the Internet but often lack basic security controls. For example, they rarely if ever receive updates and often leave services like Telnet open," says Art Sturdevant, VP of technical operations at Internet device search engine Censys. "And on devices that can be configured using a Web page, we often see self-signed certificates, a lack of TLS for login pages, and default credentials in use."


Can private data as a service unlock government data sharing?

Data as a service (DaaS), a scalable model where many analysts can access a shared data resource, is commonplace. However, privacy assurance about that data has not kept pace. Data breaches occur by the thousands each year, and insider threats to privacy are commonplace. De-identification of data can often be reversed and has little in the way of a principled security model. Data synthesis techniques can only model correlations across data attributes for unrealistically low-dimensional schemas. What is required to address the unique data privacy challenges that government agencies face is a privacy-focused service that protects data while retaining its utility to analysts: private data as a service (PDaaS). PDaaS can sit atop DaaS to protect subject privacy while retaining data utility to analysts. Some of the most compelling work to advance PDaaS can be found with projects funded by the Defense Advanced Research Projects Agency’s Brandeis Program, ... According to DARPA, “[t]he vision of the Brandeis program is to break the tension between: (a) maintaining privacy and (b) being able to tap into the huge value of data. Rather than having to balance between them, Brandeis aims to build a third option – enabling safe and predictable sharing of data in which privacy is preserved.”
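
As one illustration of the kind of principled protection PDaaS aims for, the sketch below applies the Laplace mechanism from differential privacy to a simple count query. It is a toy example of the general idea, not a description of the Brandeis projects or any specific PDaaS offering; the records, predicate, and epsilon value are all assumptions.

    import random

    def dp_count(records, predicate, epsilon=0.5):
        """Return a noisy count of records matching predicate.
        A count query has sensitivity 1, so Laplace noise with scale 1/epsilon
        gives epsilon-differential privacy for this single query."""
        true_count = sum(1 for r in records if predicate(r))
        # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon) noise.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    # An analyst asks how many subjects are over 65 without ever seeing raw rows.
    people = [{"age": 71}, {"age": 34}, {"age": 68}, {"age": 50}]
    print(dp_count(people, lambda r: r["age"] > 65, epsilon=0.5))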


How to Create High-Impact Development Teams

Today’s high-growth, high-scale organizations must have well-rounded tech teams in place -- teams that are engineered for success and longevity. However, the process of hiring, training, and building those teams requires careful planning. Tech leaders must ask themselves a series of questions throughout the process: Are we solving the right problem? Do we have the right people to solve these problems? Are we coaching and empowering our people to solve all aspects of the problem? Are we solving the problem the right way? Are we rewarding excellence? Is 1+1 at least adding up to 2 if not 3? ... When thinking of problems to solve for customers, don’t constrain yourself by current resources. A poor path is to first think of solutions based on resource limitations and then find the problems that fit those solutions. An even worse path is to lose track of the problems and simply start implementing solutions because “someone” asked for them. Instead, insist on understanding the actual problems and pain points. Development teams that understand the problems often come back with alternative, and better, solutions than the ones initially proposed.


Apstra arms SONiC support for enterprise network battles

“Apstra wants organizations to reliably deploy and operate SONiC with simplicity, which is achieved through validated automation...Apstra wants to abstract the switch OS complexity to present a consistent operational model across all switch OS options, including SONiC,” Zilakakis said. “Apstra wants to provide organizations with another enterprise switching solution to enable flexibility when making architecture and procurement decisions.” The company’s core Apstra Operating System (AOS), which supports SONiC-based network environments, was built from the ground up to support IBN. Once running, it keeps a real-time repository of configuration, telemetry and validation information to constantly ensure the network is doing what the customer wants it to do. AOS includes automation features to provide consistent network and security policies for workloads across physical and virtual infrastructures. It also includes intent-based analytics that perform regular network checks to safeguard configurations. AOS is hardware agnostic and integrates with products from Cisco, Arista, Dell, Juniper, Microsoft and Nvidia/Cumulus.
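
The continuous intent-versus-telemetry comparison described above can be pictured with a small sketch. The device names, attributes, and data shapes below are invented for illustration and do not reflect Apstra's actual data model or API; they simply show the pattern of declaring intent and flagging drift from observed state.

    # Declared intent: what each switch should be running.
    intent = {
        "leaf1": {"vlan": 100, "mtu": 9000},
        "leaf2": {"vlan": 100, "mtu": 9000},
    }

    # Observed state pulled from telemetry; leaf2 has drifted from intent.
    observed = {
        "leaf1": {"vlan": 100, "mtu": 9000},
        "leaf2": {"vlan": 200, "mtu": 9000},
    }

    def find_drift(intent, observed):
        """Yield (device, attribute, expected, actual) for every mismatch."""
        for device, expected in intent.items():
            actual = observed.get(device, {})
            for key, value in expected.items():
                if actual.get(key) != value:
                    yield device, key, value, actual.get(key)

    for device, key, want, got in find_drift(intent, observed):
        print(f"{device}: {key} expected {want}, observed {got}")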


New EU laws could erase its legacy of world-leading data protection

As the European Union finalises new digital-era laws, its legacy of world-leading privacy and data protection is at stake. Starting next week, the European Commission will kick off the introduction of landmark legislative proposals on data governance, digital market competition, and artificial intelligence. The discussions happening now and over the next few months have implications for the future of the General Data Protection Regulation and the rights this flagship law protects. With Google already (predictably) meddling in the debate, it is imperative that regulators understand what the pitfalls are and how to avoid them. ... The first new legislation out of the gate will be the Data Governance Act, which the European Commission is set to publish on November 24. According to Commissioner Thierry Breton, the new Data Strategy aims to ensure the EU “wins the battle of non-personal data” after losing the “race on personal data”. We strongly object to that narrative. While countries like the US have fostered the growth of privacy-invasive data harvesting business models that have led to repeated data breaches and scandals such as Cambridge Analytica, the EU stood against the tide, adopting strong data protection rules that put people before profits.


The journey to modern data management is paved with an inclusive edge-to-cloud Data Fabric

We want everything to be faster, and that’s what this Data Fabric approach gets for you. In the past, we’ve seen edge solutions deployed, but you weren’t processing a whole lot at the edge. You were pushing along all the data back to a central, core location -- and then doing something with that data. But we don’t have the time to do that anymore. Unless you can change the laws of physics -- last time I checked, they haven’t done that yet -- we’re bound by the speed of light for these networks. And so we need to keep as much data and systems as we can out locally at the edge. Yet we need to still take some of that information back to one central location so we can understand what’s happening across all the different locations. We still want to make the rearview reporting better globally for our business, as well as allow for more global model management. ... Typically, we see a lot of data silos still out there today with customers – and they’re getting worse. By worse, I mean they’re now all over the place between multiple cloud providers. I may use some of these cloud storage bucket systems from cloud vendor A, but I may use somebody else’s SQL databases from cloud vendor B, and those may end up having their own access methodologies and their own software development kits (SDKs).
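
The edge pattern described here boils down to reducing data where it is produced and shipping only compact aggregates back to the core. The sketch below illustrates that idea with invented site names, readings, and summary fields; it is not tied to any particular Data Fabric product.

    from statistics import mean

    def summarize_site(site_id, readings):
        """Reduce a site's raw sensor readings to the few aggregates the core needs."""
        return {
            "site": site_id,
            "count": len(readings),
            "mean": mean(readings),
            "max": max(readings),
        }

    # Raw readings stay local at each edge site; only the summaries cross the WAN.
    edge_readings = {"plant-a": [21.0, 22.4, 25.1], "plant-b": [19.8, 20.2]}
    central_view = [summarize_site(site, values) for site, values in edge_readings.items()]
    print(central_view)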


Rebooting AI: Deep learning, meet knowledge graphs

"Most of the world's knowledge is imperfect in some way or another. But there's an enormous amount of knowledge that, say, a bright 10-year-old can just pick up for free, and we should have RDF be able to do that. Some examples are, first of all, Wikipedia, which says so much about how the world works. And if you have the kind of brain that a human does, you can read it and learn a lot from it. If you're a deep learning system, you can't get anything out of that at all, or hardly anything. Wikipedia is the stuff that's on the front of the house. On the back of the house are things like the semantic web that label web pages for other machines to use. There's all kinds of knowledge there, too. It's also being left on the floor by current approaches. The kinds of computers that we are dreaming of that can help us to, for example, put together medical literature or develop new technologies are going to have to be able to read that stuff. We're going to have to get to AI systems that can use the collective human knowledge that's expressed in language form and not just as a spreadsheet in order to really advance, in order to make the most sophisticated systems."



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley

Daily Tech Digest - November 20, 2020

How software-defined storage (SDS) enables continuity of operations

Creating a new layer completely based on software in your infrastructure stack means that costs for hardware can be minimised, while boosting multi-cloud strategies. “Traditionally, regardless of how complex and well maintained your data centre, its fixed position was a weakness easily exploited or corrupted by theft, disaster or power-related issues,” said Ben Griffin, sales director at Computer Disposals Ltd. “It’s for precisely this reason that SDS is a reliable partner – offering a means of continuation should the worst happen. “Decoupling storage from hardware, as is the case with SDS, brings a huge range of benefits for the day-to-day duties of IT personnel. And, from a broader company-wide perspective, it enables simpler continuity through challenging periods by relying less on owned hardware and more on flexible, accessible and affordable multi-cloud environments. “One of the great attributes of SDS is scalability, and this, in turn, is often one of the principal means of business continuity. Should a business need to downsize in challenging times, with a view to reinvesting in personnel later down the line, SDS provides this ability with none of the usual challenges associated with managing a physical data centre.”


A perspective on security threats and trends, from inception to impact

The abuse of legitimate tools enables adversaries to stay under the radar while they move around the network until they are ready to launch the main part of the attack, such as ransomware. For nation-state-sponsored attackers, there is the additional benefit that using common tools makes attribution harder. In 2020, Sophos reported on the wide range of standard attack tools now being used by adversaries. “The abuse of everyday tools and techniques to disguise an active attack featured prominently in Sophos’ review of the threat landscape during 2020. This technique challenges traditional security approaches because the appearance of known tools doesn’t automatically trigger a red flag. This is where the rapidly growing field of human-led threat hunting and managed threat response really comes into its own,” said Wisniewski. “Human experts know the subtle anomalies and traces to look for, such as a legitimate tool being used at the wrong time or in the wrong place. To trained threat hunters or IT managers using endpoint detection and response (EDR) features, these signs are valuable tripwires that can alert security teams to a potential intruder and an attack underway.”
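
Wisniewski's "right tool, wrong time or place" tripwire can be sketched as a simple rule. The event format, business hours, host list, and process names below are assumptions made for illustration; production EDR detections draw on far richer context than this.

    from datetime import datetime

    ALLOWED_HOURS = range(8, 19)   # assumed business hours (08:00-18:59)
    ADMIN_HOSTS = {"it-admin-01"}  # hosts expected to run admin tooling
    ADMIN_TOOLS = {"powershell.exe", "psexec.exe"}

    def is_suspicious(event):
        """Flag legitimate admin tooling run off-hours or from an unexpected host."""
        hour = datetime.fromisoformat(event["timestamp"]).hour
        off_hours = hour not in ALLOWED_HOURS
        wrong_place = event["host"] not in ADMIN_HOSTS
        return event["process"] in ADMIN_TOOLS and (off_hours or wrong_place)

    event = {"timestamp": "2020-11-21T03:14:00", "host": "finance-pc-07", "process": "psexec.exe"}
    print(is_suspicious(event))  # True: an admin tool at 3 a.m. on a non-admin host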


Evolution of Emotet: From Banking Trojan to Malware Distributor

Ever since its discovery in 2014, when Emotet was a standard credential stealer and banking Trojan, the malware has evolved into a modular, polymorphic platform for distributing other kinds of computer viruses. Constantly under development, Emotet updates itself regularly to improve stealthiness and persistence and to add new spying capabilities. This notorious Trojan is one of the most frequently encountered malicious programs in the wild. Usually it arrives as part of a phishing attack: email spam that infects PCs with malware and then spreads to other computers on the network. ... In recent versions, a significant change in strategy has occurred. Emotet has turned into polymorphic malware that downloads other malicious programs to the infected computer and across the wider network. It steals data, adapts to various detection systems, and rents the infected hosts to other cybercriminals under a Malware-as-a-Service model. Since Emotet uses stolen emails to gain victims' trust, spam has consistently remained its primary delivery method, making it convincing, highly successful, and dangerous.


Decentralised Development: Common Pitfalls and how VSM can Avoid Them

A value stream mapping exercise should involve all of the teams that would ever collaborate on a release. Bringing everyone together ensures that all parts of the process are being recognised and tracked on the map. Ideally, there should be two sessions, the first focused on building a map of the current value stream. This is essentially a list of every single action that is completed from start to finish in the delivery pipeline. It includes all of the governance tests that need to be conducted, how all of the individual actions relate to each other, and which actions cannot be completed until something else has been done first. It’s important to be very thorough during this process, and make sure that every action is accounted for. Once the entire map is complete, you are left with an accurate picture of everything that needs to be done as part of the release pipeline. Not surprisingly, most companies don’t have this visibility today, but it will be invaluable moving forward. For product managers in particular, having a concrete outline of all of the processes that are occurring gives them a clear sense of all the moving parts.
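
Once every action and its prerequisites have been captured, the map itself can be represented and checked programmatically. The sketch below uses invented actions and dependencies to show one minimal representation and a simple ordering check; it is an illustration, not a feature of any particular VSM tool.

    # Each action maps to the actions that must finish before it can start.
    value_stream = {
        "write code":        [],
        "peer review":       ["write code"],
        "security scan":     ["write code"],
        "integration tests": ["peer review", "security scan"],
        "release approval":  ["integration tests"],
        "deploy":            ["release approval"],
    }

    def ordered_actions(stream):
        """Order the actions so nothing appears before its prerequisites.
        Assumes the captured map has no circular dependencies."""
        done, order = set(), []
        while len(order) < len(stream):
            for action, deps in stream.items():
                if action not in done and all(d in done for d in deps):
                    done.add(action)
                    order.append(action)
        return order

    print(ordered_actions(value_stream))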


Now Available: Red Hat Ansible Automation Platform 1.2

The Ansible project is a remarkable open source project with hundreds of thousands of users encompassing a large community. Red Hat extends this community and open source developer model to innovate, experiment and incorporate feedback to satisfy our customer challenges and use cases. Red Hat Ansible Automation Platform transforms Ansible and many related open source projects into an enterprise-grade, multi-organizational automation platform for mission-critical workloads. In modern IT infrastructure, automation is no longer a nice-to-have; it is now often a requirement to run, operate and scale how everything is managed: including network, security, Linux, Windows, cloud and more. Ansible Automation Platform includes a RESTful API for seamless integration with existing IT tools and processes. The platform also includes a web UI with an intuitive, push-button interface that lets novice users consume and operate automation with safeguards. This includes Role Based Access Controls (RBAC) to help control who can automate which job on which equipment, as well as enterprise integrations with TACACS+, RADIUS, and Active Directory. Ansible Automation Platform also enables advanced workflows.
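
As a hedged illustration of driving the platform from existing tooling through that RESTful API, the sketch below launches a job template against an automation controller. The host, template ID, token, and extra variable are placeholders, and the exact endpoints and fields available should be confirmed against your installation's API documentation.

    import requests

    CONTROLLER = "https://automation.example.com"  # hypothetical controller URL
    TEMPLATE_ID = 42                               # hypothetical job template ID
    TOKEN = "REPLACE_WITH_OAUTH_TOKEN"             # personal access token

    # Launch the template and pass one assumed prompt-on-launch variable.
    resp = requests.post(
        f"{CONTROLLER}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"extra_vars": {"target_env": "staging"}},
        timeout=30,
    )
    resp.raise_for_status()
    print("Launched job:", resp.json().get("id"))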


How Cyberattacks Work

Cyberattacks have been increasing in number and complexity over the past several years, but given the prevalence of events, and signals that greater attacks could be on the horizon, it’s a good time to examine what goes into a cyberattack. Breaches can occur when a bad actor hacks into a corporate network to steal private data. They also occur when information is seized out of cloud-based infrastructure. Many people think that security breaches only happen to sizable corporations, but Verizon found that 43% of breaches affect small businesses. In fact, this was the largest cohort measured. And the damage such businesses experience is considerable — 60% go out of business within six months of an attack. Small businesses make appealing targets because their security is usually not as advanced as that encountered within large enterprises. Systems may be outdated, and bugs often go unpatched for lengthy periods. SMBs also tend to have fewer resources available to manage an attack, limiting their ability to detect, respond, and recover. Additionally, small businesses can serve as testing grounds for hackers to test their nefarious methods before releasing an attack on another, bigger fish.


Time to Rethink Your Enterprise Software Strategy?

The response to process and software changes depends on where you are in your digital transformation journey. Early adopters of digital transformation could be hailed as genius in hindsight. Those still in their journey are speeding up to make that last push to completion in case another round of pandemic, locusts, or other plagues circle the globe. Those followers and laggards who treated digital transformation as if it were a passing trend may find themselves the proverbial coyote riding their “Acme Digital Transformation Rocket” off the COVID cliff. But, thanks to technology (NOT from Acme), there is hope. As organizations, including major software vendors, moved to Agile frameworks to deliver software and implementations more quickly, a convergence of technologies and services fell into place. Cloud services have been around for a while, but the incredible push to move infrastructure to cloud platforms and software as a service (SaaS) has been nothing short of amazing. With the latest release of rapid deployment low-code/no-code tools from Salesforce, Microsoft, Amazon, and Google/Alphabet, the toolsets are now designed for two speeds: fast and faster. Changing the software and changing the processes are related, but two different paths.


The Fintech Future: Accelerating the AI & ML Journey

Fintechs across the world are dealing with the effects of Covid-19 and face an uphill challenge in containing the impact of it on the financial system and broader economy. With rising unemployment and stagnated economies, individuals and companies are struggling with debt, while the world in general is awash in credit risk. This has pushed operational resilience to the top of fintech CXOs’ agendas, requiring them to focus on systemic risks while continuing to deliver innovative digital services to customers. To make matters worse, criminals are exploiting vulnerabilities imposed by the shift to remote operations post-Covid-19, increasing the risk of fraud and cybercrime. For fintechs, building and maintaining robust defences has, therefore, become a critical priority. Organisations around the globe are forging new models to combat financial crime in collaboration with governments, regulators, and even other fintechs. The technological advances in data analytics, AI and machine learning (ML) have been driving fintechs’ response to the crisis, accelerating the automation journey many had already embarked on. Until recently, fintechs have used traditional methods of data analysis for various applications, including the detection of fraud and predicting defaults, that require complex and time-consuming investigations.
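
A minimal sketch of the ML-driven direction described above is an anomaly detector that flags unusual transactions for review rather than relying solely on hand-written rules. The features, values, and contamination setting below are invented for illustration and fall far short of what a production fraud model would use.

    from sklearn.ensemble import IsolationForest

    # Each row: [amount, hour_of_day, merchant_risk_score] -- toy, assumed features.
    transactions = [
        [12.50, 14, 0.1],
        [40.00, 11, 0.2],
        [25.10, 16, 0.1],
        [9.99, 13, 0.3],
        [5000.00, 3, 0.9],  # large amount, odd hour, risky merchant
    ]

    model = IsolationForest(contamination=0.2, random_state=0).fit(transactions)
    labels = model.predict(transactions)  # -1 marks likely anomalies

    for row, label in zip(transactions, labels):
        if label == -1:
            print("Route for manual review:", row)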


Managing Metadata: An Examination of Successful Approaches

Metadata turns critical ‘data’ into critical ‘information.’ Critical information is data + metadata that feeds Key Performance Indicators (KPIs). He recommends asking: “What will change with a better understanding of your data?” Getting people on board involves understanding how metadata can solve problems for end users while meeting company objectives. “We want to be in a position to say, ‘I do this and your life gets better.’” To have a greater impact, he said, avoid ‘data speak’ and engage with language that the business understands. For example, the business won’t ask for a ‘glossary.’ Instead they will ask for ‘a single view of the customer, integrated and aligned across business units.’ An added benefit of using accessible language is being perceived as helpful, rather than being seen as adding to the workload. ... When documenting the Information Architecture, Adams suggests focusing on how the information flows around the architecture of the organization, rather than focusing on specific systems. Start with the type of information and where it resides and denote broad applications and system boundaries. Include data shared with people outside the organization.


Meet the Microsoft Pluton processor – The security chip designed for the future of Windows PCs

The Microsoft Pluton design technology incorporates all of the learnings from delivering hardware root-of-trust-enabled devices to hundreds of millions of PCs. The Pluton design was introduced as part of the integrated hardware and OS security capabilities in the Xbox One console released in 2013 by Microsoft in partnership with AMD and also within Azure Sphere. The introduction of Microsoft’s IP technology directly into the CPU silicon helped guard against physical attacks, prevent the discovery of keys, and provide the ability to recover from software bugs. With the effectiveness of the initial Pluton design we’ve learned a lot about how to use hardware to mitigate a range of physical attacks. Now, we are taking what we learned from this to deliver on a chip-to-cloud security vision to bring even more security innovation to the future of Windows PCs. Azure Sphere leveraged a similar security approach to become the first IoT product to meet the “Seven properties of highly secure devices.” The shared Pluton root-of-trust technology will maximize the health and security of the entire Windows PC ecosystem by leveraging the security expertise and technologies from the companies involved.



Quote for the day:

"If we were a bit more tolerant of each other's weaknesses we'd be less alone." -- Juliette Binoche