Daily Tech Digest - October 20, 2020

Five ways the pandemic has changed compliance—perhaps permanently

There is a strong acknowledgment that compliance will be forced to rely heavily on technology to ensure an adequate level of visibility into emerging issues. We need to strategically leverage technology and efficient systems to monitor risk. This has prompted speculation that a greater overlap in skills will be required of the CCO and CISO roles. This, however, also raises privacy concerns. Taylor believes the remote environment will lead to “exponential growth” in employee surveillance and that compliance officers will need to tread carefully, given that this can undermine ethical culture: “Just because the tools exist, doesn’t mean you have to use them,” she says. Compliance veteran and advisor Keith Darcy predicts dynamic and continuous risk assessment—one that considers “the rapidly deteriorating and changing business conditions. ‘One-and-done’ assessments are completely inadequate.” Some predict that investigation interviews conducted on video conference and remote auditing will become the norm. Others are concerned that policies cannot be monitored or enforced without being in the office together; that compliance will be “out of sight, out of mind” to some degree. Communication must be a top priority for compliance, as the reduction of informal contact with stakeholders and employees makes effectiveness more challenging.


In the Search of Code Quality

In general, functional and statically typed languages were less error-prone than dynamically typed, scripting, or procedural languages. Interestingly, defect types correlated more strongly with language than the number of defects did. In general, the results were not surprising, confirming what the majority of the community believed to be true. The study gained popularity and was extensively cited. There is one caveat: the results were statistical, and one must be careful when interpreting statistical results. Statistical significance does not always entail practical significance and, as the authors rightfully warn, correlation is not causation. The results of the study do not imply (although many readers have interpreted them that way) that if you switch from C to Haskell you will have fewer bugs in your code. Still, the paper at least provided data-backed arguments. But that’s not the end of the story. As replication is one of the cornerstones of the scientific method, a team of researchers tried to replicate the study from 2016. The result, after correcting some methodological shortcomings found in the original paper, was published in 2019 in the paper On the Impact of Programming Languages on Code Quality: A Reproduction Study.


3 unexpected predictions for cloud computing next year

With more than 90 percent of enterprises using multicloud, there is a need for intercloud orchestration. The capability to bind resources together in a larger process that spans public cloud providers is vital. Invoking application and database APIs that span clouds in sequence can solve a specific business problem; for example, inventory reorder points based on a common process between two systems that exist in different clouds. Emerging technology has attempted to fill this gap, such as cloud management platforms and cloud service brokers. However, they have fallen short. They only provide resource management between cloud brands, typically not addressing the larger intercloud resource and process binding. This is a gap that innovative startups are moving to fill. Moreover, if the public cloud providers want to truly protect their market share, they may want to address this problem as well. Second: cloudops automation with prebuilt corrective behaviors. Self-healing is a feature whereby a tool can take automated corrective action to restore systems to operation. However, you have to build these behaviors yourself, including automations, or wait as the tool learns over time. We’ve all seen the growth of AIops, and the future is that these behaviors will come prebuilt, with pre-existing knowledge, able to operate distributed or centralized.
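
To make the first prediction concrete, here is a minimal sketch of intercloud process binding for the inventory reorder example. The endpoint URLs, payload shapes and threshold are hypothetical placeholders, not any provider's real API:

```python
# Minimal sketch of intercloud process binding: an inventory reorder
# process spanning two public clouds. URLs and payloads are invented.
import requests

INVENTORY_API = "https://inventory.example-cloud-a.com/v1/stock"  # system in cloud A
REORDER_API = "https://orders.example-cloud-b.com/v1/purchase"    # system in cloud B
REORDER_POINT = 100

def check_and_reorder(sku: str) -> None:
    # Step 1: query the inventory system hosted in cloud A.
    stock = requests.get(f"{INVENTORY_API}/{sku}", timeout=10).json()["quantity"]
    # Step 2: if stock fell below the reorder point, invoke the purchasing
    # system hosted in cloud B -- one business process bound across providers.
    if stock < REORDER_POINT:
        requests.post(
            REORDER_API,
            json={"sku": sku, "quantity": REORDER_POINT - stock},
            timeout=10,
        ).raise_for_status()

check_and_reorder("WIDGET-42")
```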


How Organizations Can Build Analytics Agility

Data and analytics leaders must frame investments in the current context and prioritize data investments wisely by taking a complete view of what is happening to the business across a number of functions. For example, customers bank very differently in a time of crisis, and this requires banks to change how they operate in order to accommodate them. The COVID-19 pandemic forced banks to take another look at the multiple channels their customers traverse — branches, mobile, online banking, ATMs — and how their comfort levels with each shifted. How customers bank, and what journeys they engage in at what times and in what sequence, are all highly relevant to helping them achieve their financial goals. The rapid collection and analysis of data from across channels, paired with key economic factors, provided context that allowed banks to better serve customers in the moment. New and different sources of information — be it transaction-level data, payment behaviors, or real-time credit bureau information — can help ensure that customer credit is protected and that fraudulent activity is kept at bay. The business case for data investments suddenly makes itself as business leaders live through the implications of data gaps in real time.


Cisco targets WAN edge with new router family

The platform makes it possible to create a fully software-defined branch, including connectivity, edge compute, and storage. Compute and switching capabilities can be added via UCS-E Series blades and UADP-powered switch modules. Application hosting is supported using containers running on the Catalyst 8300’s multi-core, high-performance x86 processor, according to JL Valente, vice president of product management for Cisco’s Intent-Based Networking Group, in a blog about the new gear. Cisco said the Catalyst 8000V Edge Software is a virtual routing platform that can run on any x86 platform, on Cisco’s Enterprise Network Compute System, or on an appliance in a private or public cloud. Depending on what features customers need, the new family supports Cisco SD-WAN software, including Umbrella security software and Cisco Cloud On-Ramp, which lets customers tie distributed cloud applications from AWS, Microsoft and Google back to a branch office or private data center. The platforms produce telemetry that can be used in Cisco vAnalytics to provide insights into device and fabric performance as well as spot anomalies in the network and perform capacity planning.


2021 Will Be the Year of Catch-Up

With renewed focus on technology to bring about the changes needed, it’s crucial that organizations recognize that infrastructure must be secure. Our new office environment is anywhere we can find a connection to Wi-Fi, and that opens many more doors to cyber-attacks. The rapid shift in business operations significantly impacted the cyberthreat landscape – as companies fast-tracked the migration of digital assets to the cloud, they also inadvertently increased the attack surfaces from which hackers can try to gain access to their data and applications. C-suite executives are moving quickly with network plans to support exploding customer and supplier demand for contactless interactions and the unplanned need to connect a remote workforce, yet they are also aware that they are not fully prepared to adequately protect their organizations from unknown threats. The situation is further compounded by the cloud shared responsibility model, which says that cloud service providers are responsible for the security of the cloud while customers are responsible for securing the data they put into the cloud. Many organizations rely on their third-party providers to certify security management services, but the decentralized nature of this model can add complexity to how applications and computing resources are secured.


BA breach penalty sets new GDPR precedents

The reduction in the fine also adds fuel to the ongoing class action lawsuit against BA, said Long at Lewis Silkin. “Completely separate from the £20m fine by the ICO, British Airways customers, and indeed any staff impacted, are likely to be entitled to compensation for any loss they have suffered, any distress and inconvenience they have suffered, and indeed possibly any loss of control over their data they have suffered,” she said. “This might only be £500 a pop but if only 20,000 people claim that is another potential £10m hit, and if 100,000 then £50m. So whilst a win today, this is very much only round one for BA.” Darren Wray, co-founder and CTO of privacy specialist Guardum, said it was easy to imagine many of the breach’s actual victims would be put out by the ICO’s decision. “Many will feel their data and their fight to recover any financial losses resulting from the airline’s inability to keep their data safe has been somewhat marginalised,” he said. “This can only strengthen the case of the group pursuing a class action case against BA. The GDPR and the UK DPA 2018 do after all allow for such action and if the regulator isn’t seen as enforcing the rules strongly enough, it leaves those whose data was lost few alternative options,” said Wray.


Is Artificial Intelligence Closer to Common Sense?

COMET relies on surface patterns in its training data rather than understanding concepts. The key idea would be to supplement surface patterns with information from outside language, such as visual perceptions or embodied sensations. First-person representations, not language, would be the basis for common sense. Ellie Pavlick is attempting to teach intelligent agents common sense by having them interact with virtual reality. Pavlick notes that common sense would still exist even without the ability to talk to other people. Presumably, humans were using common sense to understand the world before they were communicating. The idea is to teach intelligent agents to interact with the world the way a child does. Instead of associating the idea of eating with a textual description, an intelligent agent would be told, “We are now going to eat,” and then it would see the associated actions, such as gathering food from the refrigerator and preparing the meal, and then watch the meal being consumed. Concept and action would be associated with each other. It could then generate similar words when seeing similar actions. Nazneen Rajani is investigating whether language models can reason using basic physics. For example, if a ball is inside a jar, and the jar is tipped over, the ball will fall out.


Russia planned cyber-attack on Tokyo Olympics, says UK

The UK is the first government to confirm details of the breadth of a previously reported Russian attempt to disrupt the 2018 winter Olympics and Paralympics in Pyeongchang, South Korea. It declared with what it described as 95% confidence that the disruption of both the winter and summer Olympics was carried out remotely by the GRU unit 74455. In Pyeongchang, according to the UK, the GRU’s cyber-unit attempted to disguise itself as North Korean and Chinese hackers when it targeted the opening ceremony of the 2018 winter Games, crashing the website so spectators could not print out tickets and crashing the wifi in the stadium. The key targets also included broadcasters, a ski resort, Olympic officials, service providers and sponsors of the games in 2018, meaning the targets of the attacks were not just in Korea. The GRU also deployed data-deletion malware against the winter Games IT systems and targeted devices across South Korea using VPNFilter malware. The UK assumes that the reconnaissance work for the summer Olympics – including spearphishing to gather key account details, setting up fake websites and researching individual account security – was designed to mount the same form of disruption, making the Games a logistical nightmare for businesses, spectators and athletes.


What intelligent workload balancing means for RPA

“To be truly effective, a bot must be able to work across a wide set of parameters. Let’s say, for example, a rule requires a bot to complete work for goods returned that are less than $100 in value, but during peak times when returns are high, the rules may dynamically change the threshold to a higher number. The bot should still be able to perform all the necessary steps for approval at that amount without having to be reconfigured every time.” Gopal Ramasubramanian, senior director, intelligent automation & technology at Cognizant, added: “If there are 100,000 transactions that need to be performed, then instead of manually assigning transactions to different robots, the intelligent workload balancing feature of the RPA platform will automatically distribute the 100,000 transactions across different robots and ensure transactions are completed as soon as possible. “If a service level agreement (SLA) is tied to the completion of these transactions and the robots will not be able to meet the SLA, intelligent workload balancing can also commission additional robots on demand to distribute the workload and ensure any given task is completed on time.”
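
As a rough illustration of what Ramasubramanian describes, the sketch below distributes a backlog across robots and commissions more when the projected completion time would miss the SLA. The per-robot throughput figure is an invented assumption, and this is not any vendor's actual API:

```python
# Illustrative sketch of intelligent workload balancing: commission just
# enough additional robots to drain the backlog within the SLA.
import math

TXN_PER_ROBOT_PER_HOUR = 500  # assumed throughput of a single robot

def robots_needed(backlog: int, sla_hours: float, active_robots: int) -> int:
    # Hours to drain the backlog with the current pool.
    projected = backlog / (active_robots * TXN_PER_ROBOT_PER_HOUR)
    if projected <= sla_hours:
        return active_robots
    # Otherwise scale the pool up to the minimum size that meets the SLA.
    return math.ceil(backlog / (sla_hours * TXN_PER_ROBOT_PER_HOUR))

# 100,000 transactions, a 24-hour SLA, and 5 robots currently active:
print(robots_needed(100_000, 24, 5))  # -> 9 robots required
```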



Quote for the day:

"You can build a throne with bayonets, but you can_t sit on it for long." -- Boris Yeltsin

Daily Tech Digest - October 19, 2020

How To Build Out a Successful Multi-Cloud Strategy

While a multi-cloud approach can deliver serious value in terms of resiliency, flexibility and cost savings, making sure you’re choosing the right providers requires a comprehensive assessment. Luckily, all the main cloud vendors offer free trials so you can establish which ones best fit your needs and see how they work with each other. It will pay to conduct proofs-of-concept using the free trials and run your data and code on each provider. You also need to make sure that you’re able to move your data and code around easily during the trials. It’s also important to remember that each cloud provider has different strengths—one company’s best option is not necessarily the best choice for you. For example, if your startup is heavily reliant on running artificial intelligence (AI) and machine learning (ML) applications, you might opt for Google Cloud’s AI open source platform. Or perhaps you require an international network of data centers, minimal latency and data privacy compliance for certain geographies for your globally used app. Here’s where AWS could step in. On the other hand, you might need your cloud applications to seamlessly integrate with the various Microsoft tools that you already use. This would make the case for Microsoft Azure.


How data and technology can strengthen company culture

Remote working exposed another potential weakness holding back teams from realising their potential – employee expertise that isn’t being shared. Under the lockdown, many companies realised that knowledge and experience within their workforce were highly concentrated within specific offices, regions, teams, or employees. How can these valuable insights be shared seamlessly across internal networks? A truly collaborative company culture must go beyond limited solutions, such as excessive video calls, which run the risk of burning people out. Collaboration tools that support culture have to be chosen based on their effectiveness at improving interactions, bridging gaps and simplifying knowledge sharing. ... While revamping strategies in recent months, many companies have started to prioritise customer retention and expansion over new customer acquisition, given the state of the economy. Data and technology can help employees adapt to this transition. Investing in tools that empower employees gives them the confidence, knowledge and skills they need to deliver maximum customer value. This in turn boosts customer satisfaction as staff deliver an engaging and consistent experience each time they connect.


Cloud environment complexity has surpassed human ability to manage

“The benefits of IT and business automation extend far beyond cost savings. Organizations need this capability – to drive revenue, stay connected with customers, and keep employees productive – or they face extinction,” said Bernd Greifeneder, CTO at Dynatrace. “Increased automation enables digital teams to take full advantage of the ever-growing volume and variety of observability data from their increasingly complex, multicloud, containerized environments. With the right observability platform, teams can turn this data into actionable answers, driving a cultural change across the organization and freeing up their scarce engineering resources to focus on what matters most – customers and the business.” ... 93% of CIOs said AI-assistance will be critical to IT’s ability to cope with increasing workloads and deliver maximum value to the business. CIOs expect automation in cloud and IT operations will reduce the amount of time spent ‘keeping the lights on’ by 38%, saving organizations $2 million per year, on average. Despite this advantage, just 19% of all repeatable operations processes for digital experience management and observability have been automated. “History has shown successful organizations use disruptive moments to their advantage,” added Greifeneder.


A New Risk Vector: The Enterprise of Things

The ultimate goal should be the implementation of a process for formal review of cybersecurity risk and readout to the governance, risk, and compliance (GRC) and audit committee. Each of these steps must be undertaken on an ongoing basis, instead of being viewed as a point-in-time exercise. Today's cybersecurity landscape, with new technologies and evolving adversary tradecraft, demands a continuous review of risk by boards, as well as the constant re-evaluation of the security budget allocation against rising risk areas, to ensure that every dollar spent on cybersecurity directly buys down the areas of greatest risk. We are beginning to see some positive trends in this direction. Nearly every large public company board of directors today has made cyber-risk an element either of the audit committee, risk committee, or safety and security committee. The CISO is also getting visibility at the board level, in many cases presenting at least once if not multiple times a year. Meanwhile, shareholders are beginning to ask the tough questions during annual meetings about what cybersecurity measures are being implemented. In today's landscape, each of these conversations about cyber-risk at the board level must include a discussion about the Enterprise of Things given the materiality of risk.



FreedomFI: Do-it-yourself open-source 5G networking

FreedomFi offers a couple of options to get started with open-source private cellular through its website. All proceeds will be reinvested in building up the Magma project's open-source software. Sponsors contributing $300 towards the project will receive a beta FreedomFi gateway and limited, free access to the Citizens Broadband Radio Service (CBRS) shared spectrum in the 3.5 GHz "innovation band." Despite the similar name, CBRS has nothing to do with the CB radio service used by amateurs and truckers for two-way voice communications; CB lives on in the United States in the 27MHz band. Those contributing $1,000 will get support with a "network up" guarantee, with FreedomFi offering guidance over a series of Zoom sessions. The company guarantees it won't give up until you get a connection. FreedomFi will be demonstrating an end-to-end private cellular network deployment during its upcoming keynote at the Open Infrastructure Summit and publishing step-by-step instructions on the company blog. This isn't just a hopeful idea being launched on a wing and a prayer: WiConnect Wireless is already working with it. "We operate hundreds of towers, providing fixed wireless access in rural areas of Wisconsin," said Dave Bagett, WiConnect's president.


Why We Must Unshackle AI From the Boundaries of Human Knowledge

Artificial intelligence (AI) has made astonishing progress in the last decade. AI can now drive cars, diagnose diseases from medical images, recommend movies (and even whom you should date), make investment decisions, and create art that people have sold at auction. A lot of research today, however, focuses on teaching AI to do things the way we do them. For example, computer vision and natural language processing – two of the hottest research areas in the field – deal with building AI models that can see like humans and use language like humans. But instead of teaching computers to imitate human thought, the time has now come to let them evolve on their own, so instead of becoming like us, they have a chance to become better than us. Supervised learning has thus far been the most common approach to machine learning, where algorithms learn from datasets containing pairs of samples and labels. For example, consider a dataset of enquiries (not conversions) for an insurance website, with information about a person’s age, occupation, city, income, etc., plus a label indicating whether the person eventually purchased the insurance.
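
A minimal sketch of that supervised-learning setup, using scikit-learn and a synthetic version of the insurance-enquiry example (the feature values and labels below are invented for illustration):

```python
# Supervised learning in miniature: rows pair features with a label.
from sklearn.linear_model import LogisticRegression

# (age, annual_income_k) -> 1 if the enquirer bought the policy, else 0
X = [[25, 40], [47, 95], [31, 52], [52, 120], [23, 30], [45, 88]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)  # learn from sample/label pairs
print(model.predict([[40, 80]]))        # predict for a new enquiry
```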


The Race for Intelligent AI

The key takeaway is that, fundamentally, BERT and GPT-3 have the same structure in terms of information flow. Although attention layers in transformers can distribute information in a way that a normal neural network layer cannot, they still retain the fundamental property of passing information forward from input to output. The first problem with feed-forward neural nets is that they are inefficient. When processing information, the processing chain can often be broken down into multiple small repetitive tasks. For example, addition is a cyclical process, where single-digit adders, or in a binary system full adders, can be used together to compute the final result. In a linear information system, to add three numbers there would have to be three adders chained together; this is not efficient, especially for neural networks, which would have to learn each adder unit. This is inefficient when it is possible to learn one unit and reuse it. This is also not how backpropagation tends to learn: the neural network would not try to create a hierarchical decomposition of the process, which in this case means it would not ‘scale’ to more digits. Another issue with using feed-forward neural networks to simulate “human-level intelligence” is thinking. Thinking is an optimization process.
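
The reuse argument is easy to see in ordinary code: one full adder, written once, can be chained across any number of bit positions, exactly the kind of modular decomposition a feed-forward network would have to relearn per digit. A small sketch:

```python
# One full adder, defined once, is reused at every bit position instead
# of a separate unit being learned per digit.
def full_adder(a: int, b: int, carry: int) -> tuple[int, int]:
    total = a + b + carry
    return total % 2, total // 2  # (sum bit, carry out)

def add_binary(x: list[int], y: list[int]) -> list[int]:
    # x and y are little-endian bit lists of equal length; the same
    # full_adder is reused per position, so this scales to any width.
    result, carry = [], 0
    for a, b in zip(x, y):
        bit, carry = full_adder(a, b, carry)
        result.append(bit)
    result.append(carry)
    return result

print(add_binary([1, 1, 0], [1, 0, 1]))  # 3 + 5 -> [0, 0, 0, 1] (= 8)
```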


Why Agile Transformations sometimes fail

The attitude toward Agile adoption represented by top management impacts the whole process. Disengaged management is the most common reason in my ranking. There have not been many examples in my career of a bottom-up Agile transformation being successful. Usually, top management is at some point aware of Agile activities in the company, such as Scrum adoption, but leaves them to the teams. One of the frequent reasons for this behavior is that top management does not understand that Agility is beneficial for the business, the product, and, most importantly, customers/users. They consider Agile, Scrum and Lean to be things that might improve delivery and teams' productivity. Let's imagine a situation where a large number of Scrum Masters report the same impediment. It becomes an organizational impediment. How would you resolve it when decision-makers are not interested in it? What would be the impact on teams' engagement and product delivery? Active management that fosters Agility and an empirical way of working, and actively participates in the whole process, is the secret ingredient that makes the transition more realistic. Another observation I have made is a strong focus on delivery, productivity, technology and effectiveness.


The encryption war is on again, and this time government has a new strategy

So what's going on here? Adding two new countries -- Japan and India -- the statement suggests that more governments are getting worried, but the tone is slightly different now. Perhaps governments are trying a less direct approach this time, and hoping to put pressure on tech companies in a different way. "I find it interesting that the rhetoric has softened slightly," says Professor Alan Woodward of the University of Surrey. "They are no longer saying 'do something or else'". What this note tries to do is put the ball firmly back in the tech companies' court, Woodward says, by implying that big tech is putting people at risk by not acceding to their demands -- a potentially effective tactic in building a public consensus against the tech companies. "It seems extraordinary that we're having this discussion yet again, but I think that the politicians feel they are gathering a head of steam with which to put pressure on the big tech companies," he says. Even if police and intelligence agencies can't always get encrypted messages from tech companies, they certainly aren't without other powers. The UK recently passed legislation giving law enforcement wide-ranging powers to hack into computer systems in search of data.



Code Security: SAST, Shift-Left, DevSecOps and Beyond

One of the most important elements in DevSecOps revolves around a project’s branching strategy. In addition to the main branch, every developer uses their own separate branch. They develop their code and then merge it back into that main branch. A key requirement for the main branch is to maintain zero unresolved warnings so that it passes all functional testing. Therefore, before a developer on an individual branch can submit their work, it also needs to pass all functional tests, and all static analysis tests need to pass. When a pull request or merge request has unresolved issues, whether functional test case failures or static analysis warnings, it is rejected and must be fixed and resubmitted. Functional test failures must be fixed, but the root cause of a failure may be hard to find. A functional test error might say, “Input A should generate output B,” but C comes out instead, with no indication as to which piece of code to change. Static analysis, on the other hand, will reveal exactly where there is a memory leak and will provide detailed explanations for each warning. This is one way in which static analysis can help DevSecOps deliver the best and most secure code. Finally, let’s review Lean and shift-left, and see how they are connected.
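
A hedged sketch of such a merge gate, in the spirit described above: the branch is rejected unless the functional tests pass and static analysis reports zero unresolved warnings. The specific tools (pytest, flake8) and paths are placeholders, not the article's stack:

```python
# Merge gate sketch: block the merge on test failures or any static
# analysis warning. Tool choices and paths are illustrative.
import subprocess
import sys

def gate() -> int:
    # Run the functional test suite; a nonzero exit code means failures.
    if subprocess.run(["pytest", "tests/"]).returncode != 0:
        print("Rejected: functional test failures")
        return 1
    # Run static analysis; treat any reported warning as blocking.
    sast = subprocess.run(["flake8", "src/"], capture_output=True, text=True)
    if sast.stdout.strip():
        print("Rejected: unresolved static analysis warnings")
        print(sast.stdout)
        return 1
    print("Merge allowed: zero unresolved warnings")
    return 0

sys.exit(gate())
```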



Quote for the day:

"The mediocre leader tells; The good leader explains; The superior leader demonstrates; and The great leader inspires." -- Buchholz and Roth

Daily Tech Digest - October 18, 2020

How Robotic Process Automation Can Become More Intelligent

Artificial Intelligence (AI) and its constituent disciplines, including Machine Learning (ML), Natural Language Processing (NLP), and so forth, bring learning and decision-making abilities to an RPA task. Basically, RPA is for doing; artificial intelligence is for contemplating what should be done. Artificial intelligence makes RPA intelligent. Together, these advances give rise to Cognitive Automation, which automates many use cases that were simply inconceivable before. The most recent transformation came when virtualized platforms permitted the addition and removal of resources required for processes based on their workloads. This allowed organizations to explore opportunities to define their processes based on automated rules, and it was from this that Robotic Process Automation developed. RPA goes a step further, automating the monotonous parts of a process so that human intervention is no longer needed. A straightforward application for this could be the rule-based reactions you need for certain workflows: once the rules are coded in, they don't need any kind of intervention, and the RPA deals with everything. Organizations have profited by implementing RPA-based solutions and processes, reducing expenses several times over.
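
As a toy illustration of those rule-based reactions, the sketch below codes the rules once as condition/action pairs and then applies them without human intervention; the rule contents are invented for the example:

```python
# Rules coded once as (condition, action) pairs, applied automatically.
RULES = [
    (lambda t: t["type"] == "refund" and t["amount"] < 100, "auto_approve"),
    (lambda t: t["type"] == "refund",                       "route_to_human"),
    (lambda t: t["type"] == "address_change",               "update_record"),
]

def handle(ticket: dict) -> str:
    for condition, action in RULES:
        if condition(ticket):
            return action            # first matching rule wins
    return "route_to_human"          # safe default when no rule matches

print(handle({"type": "refund", "amount": 40}))  # -> auto_approve
```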


So You Want to Be a Data Protection Officer?

The GDPR states the Data Protection Officer must be capable of performing their duties independently, and may not be “penalized or dismissed” for performing those duties. (The DPO’s loyalties are to the general public, not the business. The DPO’s salary can be considered a tax for doing business on the internet.) Philip Yannella, a Philadelphia attorney with Ballard Spahr, said: “A Data Protection Officer can’t be fired because of the decisions he or she makes in that role. That spooks some U.S. companies, which are used to employment at will. If a Data Protection Officer is someone within an organization, he or she should be an expert on GDPR and data privacy.” Not having a Data Protection Officer could get quite expensive, resulting in stiff fines on data processors and controllers for noncompliance. Fines are administered by member state supervisory authorities who have received a complaint. Yannella went on to say, “No one yet knows what kind of behavior would trigger a big fine. A lot of companies are waiting to see how this all shakes out and are standing by to see what kinds of companies and activities the EU regulators focus on with early enforcement actions.”


The State of Enterprise Architecture and IT as a Whole

EA is an enterprise-wide, business-driven, holistic practice to help steer companies towards their desired longer-term state, to respond to planned and unplanned business and technology change. Embracing EA Principles is a central part of EA, though rarely adopted. The focus in those early days was reducing complexity by addressing duplication, overlap, and legacy technology. With the line between technology and applications blurring, and application sprawl happening almost everywhere, a focus on rationalizing the application portfolio soon emerged. I would love to say that EA adoption was smooth, but there were many distractions and competing industry trends, everything from ERP to ITIL to innovation. The focus was on delivery and operations, and there was little mindshare for strategic, big-picture, and longer-term thinking. Practitioners were rewarded only for supporting project delivery. Many left the practice. And frankly, a lot of people who didn’t have EA-skills were thrust into the role. That further exacerbated adoption challenges and defined the delivery-oriented technology-focused path EA would follow. It is still dominant today.


Managing and Governing Data in Today’s Cloud-First Financial Services Industry

Artificial intelligence and machine learning technologies have proven to accelerate the ability for banks, insurance companies, and retail brokerages to successfully combat fraud, manage risk, cross sell and upsell, and provide tailored services to existing clients. To harness the power and potential of these solutions, financial institutions will look to leverage external data from third-party vendors and partners and in-house data to mine for the best answers and recommendations. Today’s cloud-native and cloud-first solutions offer financial institutions the ability to capture, process, analyze, and leverage the intelligence from this data much faster, more efficiently, and more effectively than trying to do it internally. Improving Customer Experience Through Digital Modernization: Banks and insurance companies have been modernizing and/or replacing legacy core systems, many of which have been around for decades, with cloud-native and cloud-first solutions. These include offerings from organizations like FIS Global, nCino, and my former employer EllieMae in the banking industry, and offerings from Guidewire and DuckCreek for cloud-native policy administration, claims, and underwriting solutions in the insurance sector.


Optimizing Privacy Management through Data Governance

Data governance is responsible for ensuring data assets are of sufficient quality, and that access is managed appropriately to reduce the risk of misuse, theft, or loss. Data governance is also responsible for defining guidelines, policies, and standards for data acquisition, architecture, operations, and retention among other design topics. In the next blog post, we will discuss further the segregation of duties shown in figure 1; however, at this point it is important to note that modern data governance programs need to take a holistic view to guide the organization to bake quality and privacy controls into the design of products and services. Privacy by design is an important concept to understand and a requirement of modern privacy regulations. At the simplest level it means that processes and products that collect and or process personal information must be architected and managed in a way that provides appropriate protection, so that individuals are not harmed by the processing of their information nor by a privacy breach. Malice is not present in all privacy breaches. Organizations have experienced breaches related to how they managed physical records containing personal information, because staff were not trained to properly handle the information.


The Definitive Guide to Delivering Value from Data using DataOps

The DataOps solution to the hand-over problem is to allow every stakeholder full access to all process phases and tie their success to the overall success of the entire end-to-end process ... Value delivery is a sprint, not a relay. Treat the data-to-value process as a unified team sprint to the finish line (business value) rather than a relay race where specialists pass the baton to each other in order to get to the goal. It is best to have a unified team spanning multiple areas of expertise responsible for overall value delivery instead of single specialized groups responsible for a single process phase. ... A well architected data infrastructure accelerates delivery times, maximizes output, and empowers individuals. The DevOps movement played an influential role in decoupling and modularizing software infrastructure from single monolithic applications to multiple fail-safe microservices. DataOps aims to bring the same influence to data infrastructure and technology. ... At its core, DataOps aims to promote a culture of trust, empowerment, and efficiency in organizations. A successful DataOps transformation needs strategic buy-in starting from C-suite executives to individual contributors.


How do Organizations Choose a Cloud Service Provider? Is it Like an Arranged Marriage?

While not as critical a decision as marriage, most organizations today face a similar trust-based dilemma: which cloud service provider to trust with their data? There is no debate over the clear value drivers for cloud computing: performance, cost and scalability, to name a few. However, the lack of control and oversight could make organizations hesitant to hand over their most valuable asset, information, to a third party, trusting that it has adequate information protection controls in place. With any trust-based decision, external validation can play an important role. Arranged marriages rely on positive feedback and references, mostly attested by the matchmaker. They also rely on supporting evidence, such as the corroboration of relatives and more tangible factors such as the education and career history of the potential bride or groom. In the case of cloud service providers, independent validation such as certifications, attestations or other information protection audits could make or break a deal. The notion of cloud computing may have existed as far back as the 1960s, but cloud services took the form we know today with the launch of services from big players such as Amazon, Google and Microsoft in 2006-2007.


Professor creates cybersecurity camp to inspire girls to choose STEM careers

How did I get into cybersecurity? I got into cybersecurity, I would say, maybe five years ago. But in the field of IT, I always liked to pull things apart, figure out how they work, and problem-solve. I was always in the field of IT. I worked as a programmer at IBM for a couple of years, and then I segued into the academy, because I felt I could be more impactful in front of a classroom. In that IBM programming setting, I noticed I was the only woman, and a woman of color, in that field. I said, "OK, I need to do something to change this." I went into the academy and said, "Maybe if I was an instructor, I could empower more young women to go and pursue this field of study." Then, as time went on over those five years, the cybersecurity discipline became very hot. And really, it was very, very intriguing how hackers were hacking in and sabotaging systems. Again, it was like a puzzle, problem solving: how can we out-think the hacker, and how can we make things safe? That became very, very intriguing to me. Then I wrote this grant, the GenCyber grant, which Dr. Li-Chiou Chen, my chair at the time, recommended that I explore. I submitted it, and I was shocked that I won the grant.


Germany’s blockchain solution hopes to remedy energy sector limitations

If successfully executed, Morris explained that BMIL could serve as the basis for a wide range of DERs supporting both Germany’s wholesale and retail electricity markets: “This will make it easy, efficient and low cost for any DER in Germany to participate in the energy market. Grid operators and utility providers will also gain access to an untapped decarbonized Germany energy system.” However, technical challenges remain. Mamel from DENA noted that BMIL is a project built around the premise of interoperability — one of blockchain’s greatest challenges to date. While DENA is technology agnostic, Mamel explained that DENA aims to test a solution that will be applicable to the German energy sector, which already consists of a decentralized framework with many industry players using different standards. As such, DENA decided to take an interoperability approach to drive Germany’s energy economy, testing two blockchain development environments in BMIL. Both Ethereum and Substrate, the blockchain-building framework for Polkadot, will be applied, along with different concepts regarding decentralized identity protocols.


How to Overcome the Challenges of Using a Data Vault

Within the data vault approach, there are certain layers of data. These range from the source systems where data originates, to a staging area where data arrives from the source system, modeled according to the original structure, to the core data warehouse, which contains the raw vault, a layer that allows tracing back to the original source system data, and the business vault, a semantic layer where business rules are implemented. Finally, there are data marts, which are structured based on the requirements of the business. For example, there could be a finance data mart or a marketing data mart, holding the relevant data for analysis purposes. Of these layers, the staging area and the raw vault are best suited to automation. The data vault modeling technique brings ultimate flexibility by separating the business keys, which uniquely identify each business entity and do not change often, from their attributes. This results, as mentioned earlier, in many more data objects being in the model, but it also provides a data model that can be highly responsive to changes, such as the integration of new data sources and business rules.
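
A small sketch of that separation, with a hub holding only the stable business key and a satellite holding the changeable attributes with load timestamps. Field names are illustrative, and a real implementation would live in database tables rather than classes:

```python
# Data vault separation in miniature: stable business key in the hub,
# changeable attributes (with history) in satellites.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CustomerHub:
    customer_key: str        # stable business key, rarely changes

@dataclass
class CustomerSatellite:
    customer_key: str        # points back to the hub
    load_ts: datetime        # when this version of the attributes arrived
    name: str                # descriptive attributes live here and can
    city: str                # change freely without touching the hub

hub = CustomerHub("CUST-001")
history = [
    CustomerSatellite("CUST-001", datetime(2020, 1, 5), "A. Smith", "Leeds"),
    CustomerSatellite("CUST-001", datetime(2020, 9, 2), "A. Smith", "York"),
]
```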



Quote for the day:

"The closer you get to excellence in your life, the more friends you'll lose. People love average and despise greatness." -- Tony Gaskins

Daily Tech Digest - October 17, 2020

Data literacy skills key to cost savings, revenue growth

"The bottom line is that bad data is costly, because decision-makers, managers, data scientists and others who have to work with data have to compensate for that bad data," she said. "That's time-consuming, but the real cost of that bad data is that it's an obstacle in their journey to become insights-driven." To prevent those losses -- and to help people make data-driven decisions that have the potential to spur revenue growth -- organizations should enable employees with data literacy skills. Employees need an education in data. Data-driven companies simply grow faster, Belissent said, noting that Forrester has studied hundreds of companies. And organizations do want to be data-driven, she continued, adding that 88% of those surveyed by Forrester want to improve the use of data insights in their decision-making. But if their data is low quality, or if the data isn't there at all, it serves as a significant impediment to growth. And in fact, according to Forrester's research, fewer than half of all decisions are made based on quantitative analysis. Organizations, therefore, need to implement training programs to give employees the data literacy skills -- the ability to evaluate, work with, communicate and apply data -- to do their jobs.


Real Time APIs in the Context of Apache Kafka

One of the challenges that we have always faced in building applications, and systems as a whole, is how to exchange information between them efficiently whilst retaining the flexibility to modify the interfaces without undue impact elsewhere. The more specific and streamlined an interface, the greater the likelihood that it is so bespoke that changing it would require a complete rewrite. The inverse also holds; generic integration patterns may be adaptable and widely supported, but at the cost of performance. Events offer a Goldilocks-style approach in which real-time APIs can be used as a foundation for applications that is flexible yet performant, loosely coupled yet efficient. Events can be considered the building blocks of most other data structures. Generally speaking, they record the fact that something has happened and the point in time at which it occurred. An event can capture this information at various levels of detail: from a simple notification to a rich event describing the full state of what has happened. From events, we can aggregate up to create state—the kind of state that we know and love from its place in RDBMS and NoSQL stores.
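
A minimal sketch of aggregating events up into state: replaying a stream of account events folds them into a current balance, the kind of state a database would otherwise hold. The event shapes are invented for the example:

```python
# Events as building blocks: replaying the log rebuilds the state.
events = [
    {"account": "acc-1", "type": "deposit",    "amount": 100},
    {"account": "acc-1", "type": "withdrawal", "amount": 30},
    {"account": "acc-1", "type": "deposit",    "amount": 45},
]

balance = 0
for event in events:  # aggregate events, in order, into current state
    if event["type"] == "deposit":
        balance += event["amount"]
    else:
        balance -= event["amount"]

print(balance)  # -> 115: state derived purely from the event stream
```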


Emotional AI — can chatbots convey empathy?

Maya Angelou once said — “I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.” So, since emotions are our most human quality, what if we could teach artificial intelligence (AI) to understand our feelings? In recent years, AI and machine learning algorithms have held the world spellbound with the rapid pace of development and integration in various industries and verticals. The goal of AI research has shifted over the years; to compute what humans could not, to beat us in specific tasks, and most recently to create an algorithm that can show how it’s working. To put how rapidly AI is growing in context, a Pew Research Center study reports that by 2025, AI and robotics will permeate most segments of daily life, while an Oxford University study projects that within the next 25 years, developed nations will experience job loss rates of up to 47%. AI is displacing the roles of both white and blue-collar workers, from travel agents to bank tellers, gas station attendants to factory workers. This has tremendous implications for industries such as home maintenance, transport and logistics, healthcare, and most significantly, customer service.


The brain of the SIEM and SOAR

What the nerves need is a brain that can receive and interpret their signals. An XDR engine, powered by Bayesian reasoning, is a machine-powered brain that can investigate any output from the SIEM or SOAR at speed and scale. This replaces the traditional Boolean logic (that is searching for things that IT teams know to be somewhat suspicious) with a much richer way to reason about the data. This additional layer of understanding will work out of the box with the products an organization already has in place to provide key correlation and context. For instance, imagine that a malicious act occurs. That malicious act is going to be observed by multiple types of sensors. All of that information needs to be put together, along with the context of the internal systems, the external systems and all of the other things that integrate at that point. This gives the system the information needed to know the who, what, when, where, why and how of the event. This is what the system’s brain does. It boils all of the data down to: “I see someone bad doing something bad. I have discovered them. And now I am going to manage them out.” What the XDR brain is going to give the IT security team is more accurate, consistent results, fewer false positives and faster investigation times.
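
As a toy sketch of that Bayesian reasoning, each observed signal updates the probability that a session is malicious rather than firing a Boolean rule. The priors and likelihoods here are invented, the signals are treated as independent (a naive assumption), and real XDR engines are far richer:

```python
# Bayesian-style scoring: each signal shifts the belief, no hard rules.
prior = 0.01  # baseline belief that a given session is malicious

# (P(signal | malicious), P(signal | benign)) for each observed signal
signals = [
    (0.70, 0.05),  # e.g. login from a never-seen geography
    (0.60, 0.10),  # e.g. unusual volume of outbound data
]

posterior = prior
for p_mal, p_ben in signals:
    # Bayes' rule: update the belief with each new piece of evidence.
    num = p_mal * posterior
    posterior = num / (num + p_ben * (1 - posterior))

print(f"P(malicious | evidence) = {posterior:.2f}")  # -> 0.46 here
```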


Pachyderm and the power of GitHub Actions: MLOps meets DevOps

The kinds of problems we face in machine learning are fundamentally different than the ones we face in traditional software coding. Functional issues, like race conditions, infinite loops, and buffer overflows, don’t come into play with machine learning models. Instead, errors come from edge cases, lack of data coverage, adversarial assault on the logic of a model, or overfitting. Edge cases are the reason so many organizations are racing to build AI Red Teams to diagnose problems before things go horribly wrong. It’s simply not enough to port your CI/CD and infrastructure code to machine learning workflows and call it done. Handling this new generation of machine learning operations (MLOps) problems requires a brand new set of tools that focus on the gap between code-focused operations and MLOps. The key difference is data. We need to version our data and datasets in tandem with the code. That means we need tools that specifically focus on data versioning, model training, production monitoring, and many others unique to the challenges of machine learning at scale. Luckily, we have a strong tool for MLOps that does seamless data version control: Pachyderm. 
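
To illustrate the underlying idea of data versioning (the concept only, not Pachyderm's actual interface), a content hash can serve as a version id for a dataset, so a model can be tied to the exact data it was trained on:

```python
# Content-addressed dataset versioning in miniature: any change to the
# data yields a new version id, just as a commit hash does for code.
import hashlib
import json

def dataset_version(rows: list[dict]) -> str:
    # Hash a canonical serialization of the data.
    blob = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

v1 = dataset_version([{"x": 1, "y": 0}, {"x": 2, "y": 1}])
v2 = dataset_version([{"x": 1, "y": 0}, {"x": 2, "y": 1}, {"x": 3, "y": 1}])
print(v1, v2, v1 == v2)  # different data -> different version ids
```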


Microsoft: Learn JavaScript Node.js with this new free course

The Node.js course teaches beginners what they need to know to build things like web servers, microservices, command-line apps, web interfaces, drivers for database access, desktop apps using Electron, IoT client and server libraries for single-board computers like Raspberry Pi, machine-learning models and more. Yohan Lasorsa, a senior Microsoft cloud developer advocate and main host of the Node.js series, recommends students complete the JavaScript video series before starting the Node.js series. To accompany the video tutorials, Microsoft has also published an extensive interactive Node.js course consisting of five modules. The modules include an introduction to Node.js that explains what it is, how it works, and when it could be useful. The second module explains how to use dependencies obtained from the NPM registry, while the third takes students through debugging Node.js apps with the built-in debugger and the debugger available in Microsoft's Visual Studio Code (VS Code) editor. The fourth and fifth modules teach students how to work with files and directories in Node.js apps and how to build a web API with Node.js and the Express.js framework for adding things like authentication.


Exponential growth in DDoS attack volumes

We recognize the scale of potential DDoS attacks can be daunting. Fortunately, by deploying Google Cloud Armor integrated into our Cloud Load Balancing service—which can scale to absorb massive DDoS attacks—you can protect services deployed in Google Cloud, other clouds, or on-premise from attacks. We recently announced Cloud Armor Managed Protection, which enables users to further simplify their deployments, manage costs, and reduce overall DDoS and application security risk. Having sufficient capacity to absorb the largest attacks is just one part of a comprehensive DDoS mitigation strategy. In addition to providing scalability, our load balancer terminates network connections on our global edge, only sending well-formed requests on to backend infrastructure. As a result it can automatically filter many types of volumetric attacks. For example, UDP amplification attacks, synfloods, and some application-layer attacks will be silently dropped. The next line of defense is the Cloud Armor WAF, which provides built-in rules for common attacks, plus the ability to deploy custom rules to drop abusive application layer requests using a broad set of HTTP semantics.


Best Practices for Managing Remote IT Teams

Many DBAs and developers have been working remotely for months now, but as IT budgets grow tighter, they’ll need to do more with less. Ensuring DBAs have the ability to monitor the database from anywhere will be a core part of a continued successful remote working strategy. There are many reasons for database professionals to embrace remote monitoring, whether it’s migrating to the cloud, adapting to new challenges, keeping an eye on multiple instances in many environments or gaining fine-grained access to monitoring data. ... Cloud adoption is up significantly this year as development teams turn to it, particularly for greenfield projects. But with all of that data migration, database professionals are struggling with being able to monitor cloud-based servers alongside on-premises servers, and having a distributed team doesn’t make it easier. Adopting remote monitoring tools can simplify monitoring of the cloud—once you’re monitoring a remote database server it doesn’t matter where the server is. It’s impossible to say what might happen next month or even next year, but as companies grapple with these cloud challenges, advanced remote monitoring tools can help monitor disparate, hybrid environments from one screen.


Hearing The World Through Machine Learning

With ML, companies can apply cutting-edge technology to transform an age-old problem. Startups are leveraging deep learning and advanced signal processing at a granularity not previously possible to improve hearing quality.  Some incumbent hearing aid companies have recently touted their ability to add “AI” features such as Alexa integrations and step counters. Unfortunately, these features don’t seem to improve actual hearing quality nor take advantage of true ML capabilities beyond generating marketing buzz. ... In my conversation with Andre Esteva, the Head of Medical AI at Salesforce, he noted that “traditional approaches have been limited by extensive manual efforts to acquire data, hand-craft it into a usable format, prepare rudimentary algorithms and deploy them to devices. In contrast, ML has a natural flywheel effect in which devices collect data at scale, ML training protocols automatically process the data, update themselves and redeploy. The effect is a significant reduction in product feedback cycles and an increase in the range of capabilities available. The beauty of this approach is that the underlying intelligence improves over time as the neural nets go through iterative training.”


Q&A on the Book Leading with Uncommon Sense

It is a three-step practice that includes pausing, introspecting, and acting. It requires leaders to continually cycle through the three steps: pause, introspect and act. At the core of the practice is the need to slow down. Leaders can pause both in the moment when reacting to a difficult situation or in a planned, proactive way to prepare for challenges and to harvest learnings. When introspecting, leaders look inward and examine their own thoughts or feelings, carefully investigating what is happening with their thinking. Introspecting allows leaders to pay attention to four areas: recognizing what is outside of our awareness, learning from our emotions, tracking the impact of social identities, and embracing uncertainty. After investigating these four areas and gathering useful information, leaders are in a better position to take action. Finally, by pausing and introspecting, we argue that leaders are in a better position to take action. In addition, we know that leaders cannot allow themselves to be paralyzed by the complexities of any given moment and that they must have the courage to make decisions and take action in the very face of that complexity.



Quote for the day:

"Just because you can't have what you want NOW doesn't mean never. Be patient, persistent and resourceful." -- Tim Fargo

Daily Tech Digest - October 16, 2020

New Emotet attacks use fake Windows Update lures

According to an update from the Cryptolaemus group, since yesterday, these Emotet lures have been spammed in massive numbers to users located all over the world. Per this report, on some infected hosts, Emotet installed the TrickBot trojan, confirming a ZDNet report from earlier this week that the TrickBot botnet survived a recent takedown attempt from Microsoft and its partners. These booby-trapped documents are being sent from emails with spoofed identities, appearing to come from acquaintances and business partners. Furthermore, Emotet often uses a technique called conversation hijacking, through which it steals email threads from infected hosts, inserts itself in the thread with a reply spoofing one of the participants, and adds the booby-trapped Office documents as attachments. The technique is hard to pick up, especially among users who work with business emails on a daily basis, which is why Emotet so often manages to infect corporate or government networks. In these cases, training and awareness are the best way to prevent Emotet attacks. Users who work with emails on a regular basis should be made aware of the danger of enabling macros inside documents, a feature that is very rarely used for legitimate purposes.


Prolific Cybercrime Group Now Focused on Ransomware

Overall, the group does not display sophisticated tactics, techniques and procedures (TTPs), but they are aggressive in their attempts to gain a foothold in companies, says Kimberly Goody, senior manager of the Mandiant threat intelligence financial crime team at FireEye. "The main thing that sets this group apart from our perspective is how widespread their campaigns are," she says. "They are sophisticated, but they have a wide reach. And their constant evolution of their TTPs—even though minor—can prevent organizations from being able to adequately defend against their spam campaigns." The group also highlights a trend observed by FireEye. Since early 2019, financial cybercrime groups once focused on stealing payment-card data are now shifting to compromising corporate networks, infecting a significant number of systems with ransomware, and then extorting the business for large sums, Goody says. "Point of sale intrusions were very profitable, and we saw actors such as FIN6 and FIN7—all the way back to FIN5—they were targeting payment card data," Goody says.


Agile: 4 signs your transformation is in trouble

True culture change requires more than a shot in the arm. The shot in the arm jolts the team awake and gets them moving, but from that moment the old culture drags everyone back where they started, so you have to fight against it. If you started with fun and creativity (or just never got there), look for opportunities to light the path toward a more creative and fun world at a leadership level. Virtual happy hours are fine, but, especially during COVID, you need to go further than that to set the example. Maybe you throw in a game. Maybe you have an appetizer delivered to each person’s house. Maybe you give each person $30 to surprise a teammate with a personal encouragement. No matter the approach, bring back the fun and joy and you’ll boost creativity from your agile teams. When you go to the gym and you only lift weights to strengthen your biceps, they get stronger while your leg muscles stay the same (or get weaker). The same thing happens in agile and produces similarly disproportionate results. Focusing on agility in one part of the organization (like the software teams), but not the leadership that fills their funnel, actually builds fragility into your business.


Critical SonicWall VPN Portal Bug Allows DoS, Worming RCE

“VPN bugs are tremendously dangerous for a bunch of reasons,” he told Threatpost. “These systems expose entry points into sensitive networks and there is very little in the way of security introspection tools for system admins to recognize when a breach has occurred. Attackers can breach a VPN and then spend months mapping out a target network before deploying ransomware or making extortion demands.” Adding insult to injury, this particular flaw exists in a pre-authentication routine, and within a component (SSL VPN) which is typically exposed to the public internet. “The most notable aspect of this vulnerability is that the VPN portal can be exploited without knowing a username or password,” Young told Threatpost. “It is trivial to force a system to reboot…An attacker can simply send crafted requests to the SonicWALL HTTP(S) service and trigger memory corruption.” However, he added that a code-execution attack does require a bit more work. “Tripwire VERT has also confirmed the ability to divert execution flow through stack corruption, indicating that a code-execution exploit is likely feasible,” he wrote, adding in an interview that an attacker would need to also leverage an information leak and a bit of analysis to pull it off.


Avoiding Serverless Anti-Patterns with Observability

New adopters of serverless are especially susceptible to anti-patterns, and failing to recognize them, or to understand their effects, can be frustrating and act as a barrier to serverless adoption. Observability mitigates this black-box effect, and understanding the possible anti-patterns lets us monitor the right metrics and take the right actions. This article therefore goes through some of the major anti-patterns unique to serverless and describes how the right observability strategy can cushion the impact of anti-patterns creeping into your serverless architectures. Serverless applications tend to work best when asynchronous. This is a concept preached by Eric Johnson in his talk at ServerlessDays Istanbul, titled “Thinking Async with Serverless.” He later went on to present a longer version of the talk at ServerlessDays Nashville. As teams and companies begin to adopt serverless, one of the biggest mistakes they can make is designing their architecture with a monolith mentality. The result is a lift-and-shift of their previous architectures, introducing large controller functions and misplaced await calls, as sketched below.
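To make the contrast concrete, here is a minimal Python sketch of the same AWS Lambda entry point written both ways. The function name, queue URL, and payload handling are hypothetical, invented for illustration rather than taken from the talk:

    import json
    import boto3

    sqs = boto3.client("sqs")
    lam = boto3.client("lambda")
    ORDER_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

    # Anti-pattern: a "controller" function that synchronously waits on
    # downstream work, paying for idle time and coupling the caller to
    # every downstream failure.
    def handle_order_sync(event, context):
        resp = lam.invoke(FunctionName="process-order",      # hypothetical function
                          InvocationType="RequestResponse",  # blocks until it finishes
                          Payload=json.dumps(event).encode())
        return json.loads(resp["Payload"].read())

    # Async alternative: accept the request, enqueue it, and return
    # immediately; a separate consumer function drains the queue.
    def handle_order_async(event, context):
        sqs.send_message(QueueUrl=ORDER_QUEUE_URL, MessageBody=json.dumps(event))
        return {"statusCode": 202, "body": "accepted"}

The synchronous version pays for two functions while one of them sits idle; the asynchronous version lets failed work be retried from the queue instead of bubbling errors up to the caller.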


Only the Agile Survive in Today’s Ever-Changing Business Environment

It’s almost inevitable that you’ll end up overlooking a vital document or missing a key contract in the hectic rush. Scrabbling around for all the relevant files and folders causes your confidence to leak away as you feel that you’re just not ready for this deal, and I’ve often seen that become a self-fulfilling prophecy. One company I consulted for learned this lesson when a well-known international consumer goods brand showed interest in buying their logistics business. Although the CEO had been hoping to arrange an exit on favorable terms, the CFO wasn’t on board and hadn’t made any advance preparations for due diligence situations. The prospective buyer was only in town for three days and wanted to look over their documents and agree on a preliminary contract before she left, but the CFO was so rattled by the pressure that he presented a profit and loss statement from the wrong year. The buyer declined to continue with the negotiations, and the CFO was left knowing that he’d let a great deal slip through his fingers simply because he didn’t have all of his books digitized and organized in a secure, centralized resource.


Singapore Launches IoT Cybersecurity Labelling

The Cybersecurity Labelling Scheme will focus first on Wi-Fi routers and smart home hubs, according to the Cyber Security Agency of Singapore. "Amid the growth in number of IoT products in the market, and in view of the short time-to-market and quick obsolescence, many consumer IoT products have been designed to optimize functionality and cost over security," the Cyber Security Agency says. "As a result, many devices are being sold with poor cybersecurity provisions, with little to no security features built-in." ... Singapore's program is voluntary for manufacturers for now, but the nation intends eventually to make it mandatory. The scheme has four rating levels, and the CSA has published detailed information for manufacturers. Developers can declare that their products conform to the first two levels. The first level means a product meets basic security requirements, such as mandating the use of unique passwords and delivering software updates, as dictated by the European Telecommunications Standards Institute's EN 303 645 standard. The second level encompasses the first-level requirements plus adherence to the IoT Cyber Security Guide developed by Singapore's Infocomm Media Development Authority, or IMDA.


Why AI can’t ever reach its full potential without a physical body

A designer can’t effectively build a software sense-of-self for a robot. If a subjective viewpoint were designed in from the outset, it would be the designer’s own viewpoint, and it would also need to learn and cope with experiences unknown to the designer. So what we need to design is a framework that supports the learning of a subjective viewpoint. Fortunately, there is a way out of these difficulties. Humans face exactly the same problems but they don’t solve them all at once. The first years of infancy display incredible developmental progress, during which we learn how to control our bodies and how to perceive and experience objects, agents and environments. We also learn how to act and the consequences of acts and interactions. Research in the new field of developmental robotics is now exploring how robots can learn from scratch, like infants. The first stages involve discovering the properties of passive objects and the “physics” of the robot’s world. Later on, robots note and copy interactions with agents (carers), followed by gradually more complex modelling of the self in context. In my new book, I explore the experiments in this field.


Singapore releases AI ethics, governance reference guide

Observing that AI seeks to inject intelligence into machines to mimic human action and thought, SCS President Chong Yoke Sin noted that rogue or misaligned AI algorithms with unintended bias could cause significant damage, underscoring the importance of ensuring AI is used ethically. "On the other hand, stifling innovation in the use of AI will be disastrous as the new economy will increasingly leverage AI," Chong said, as she stressed the need for a balanced approach that prioritised human safety and interests. Speaking during SCS' Tech3 Forum, Singapore's Minister for Communications and Information S. Iswaran further underscored the need to build trust through the responsible use of AI in order to drive adoption and extract the most benefit from the technology. "Responsible adoption of AI can boost companies' efficiencies, facilitate decision-making, and help employees upskill into more enriching and meaningful jobs," Iswaran said. "Above all, we want to build a progressive, safe, and trusted AI environment that benefits businesses and workers, and drives economic transformation." The launch of the reference guide gives businesses access to the counsel of experts proficient in AI ethics and governance, so they can deploy the technology responsibly, the minister said.


How to ensure faster, quality code to ease the development process

If there’s one metric most businesses focus on when it comes to coding, it’s speed. Tech and dev teams are at the forefront of innovation, and they’re used to moving at a serious pace. Anything that slows the process of shipping code damages their ability to perform. To move quickly, though, and to get from planning to coding in record time, teams need real-time visibility into what’s being worked on and transparent access to the latest updates from the team. Closed-off communication like email, which limits visibility of information to a handful of people selected by a single sender, isn’t up to the task. Instead, channel-based communication can provide a single space for developers to collaborate, share priorities and simplify processes in order to speed up testing and deployment. Rather than having to sift through information flying in from different sources, channel-based messaging integrates existing tools into a single place, meaning developers can increase visibility over deploys and get straight to the information they need. Developers can pull in key material using integrations that plug different apps like Jira and GitHub right into their discussions.



Quote for the day:

"A coach is someone who can give correction without causing resentment." -- John Wooden

Daily Tech Digest - October 15, 2020

6 Reasons Why Internal Data Centers Won’t Disappear

Most companies are moving to a hybrid computing model, which is a mix of on-premises and cloud-based IT. The value of a hybrid computing approach is that it gives organizations agility and flexibility. You have the option of insourcing or outsourcing systems whenever there is a business or technology need to do so. By adopting a hybrid strategy, companies can also take advantage of the best strategic, operational and cost options. In some cases, the best choice might be to outsource to the cloud; in others, an in-house option might be preferable. Here is an example: a large company with a highly customized ERP system from a well-known vendor acquires a smaller company. Operationally, the desire is to move the newly acquired, smaller company onto the enterprise in-house ERP system, but there are so many customized programs and interfaces that the company decides instead to move the new company onto a cloud-based, generic version of the software. The advantage is that the newly acquired company gets acclimated to the features and functions of the ERP system. Going forward, the parent company has the option of either migrating the new company over to the corporate ERP system at an unhurried pace, or joining the newly acquired company in the cloud by migrating the enterprise ERP there as well.


What is cryptography? How algorithms keep information secret and safe

Secret key cryptography, sometimes also called symmetric key, is widely used to keep data confidential. It can be very useful for keeping a local hard drive private, for instance; since the same user is generally encrypting and decrypting the protected data, sharing the secret key is not an issue. Secret key cryptography can also be used to keep messages transmitted across the internet confidential; however, to successfully make this happen, you need to deploy our next form of cryptography in tandem with it. ... In public key cryptography, sometimes also called asymmetric key, each participant has two keys. One is public, and is sent to anyone the party wishes to communicate with. That's the key used to encrypt messages. But the other key is private, shared with nobody, and it's necessary to decrypt those messages. To use a metaphor: think of the public key as opening a slot on a mailbox just wide enough to drop a letter in. You give those dimensions to anyone who you think might send you a letter. The private key is what you use to open the mailbox so you can get the letters out. The mathematics of how you can use one key to encrypt a message and another to decrypt it are much less intuitive than the way the key to the Caesar cipher works.
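As a concrete illustration, here is a minimal sketch of both schemes using Python's third-party cryptography package (pip install cryptography); the message and key size are chosen only for the example:

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Secret (symmetric) key: one key both encrypts and decrypts.
    secret_key = Fernet.generate_key()
    f = Fernet(secret_key)
    token = f.encrypt(b"meet me at noon")
    assert f.decrypt(token) == b"meet me at noon"

    # Public (asymmetric) keys: the public key is the mail slot,
    # the private key opens the mailbox.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = public_key.encrypt(b"meet me at noon", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"meet me at noon"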


Mitigating Business Risks in Your 5G Deployment

For 5G networks to thrive, the underlying architecture will be distributed in the cloud and will no longer depend on dedicated appliances. The corresponding implementation and deployment of the carriers’ networks will evolve to expand capacity, reduce latency, lower costs and reduce power requirements. To reinforce this open environment, organizations using 5G will have to virtualize their network functions, trading control over the physical elements of the networks for the infrastructure benefits of 5G. Services are also no longer restricted to service providers’ networks and can originate from external network domains. This means that services can rely on virtualized network resources physically closer to the connected device for more efficient delivery. 5G architectures will rely on a software-defined networking/network functions virtualization (SDN/NFV)-supported foundation for their transition to the cloud. This change to the network infrastructure brings corresponding changes to the cyberattack threat landscape. 5G will use the concept of network slicing to let service providers “slice” portions of spectrum to offer specialized services for specific device types, all while remaining on the same physical infrastructure.


Microsoft fights botnet after Office 365 malware attack

According to filed court documents, Microsoft sought permission to take over domains and servers belonging to the malicious Russia-based group. It also wanted legal assent to block IP addresses associated with the plot and to prevent the entities behind it from purchasing or leasing servers. The requests were part of a broader plan of action to destroy data stored in the hackers' systems. The intention was first to block access to the servers controlling over 1 million infected machines, a crucial step toward cutting off control of more than 250 million breached email addresses. Microsoft has said that Trickbot's strategy was largely successful because it used a custom third-party Office 365 app: tricking users into installing it let the perpetrators bypass passwords by relying on the app's OAuth2 token instead. Through this technique, they could access compromised Microsoft 365 user accounts and the sensitive data associated with them, such as email content and contact lists. In the court documents, Microsoft laments that Trickbot used authentic-looking Microsoft email addresses and other company information to deceive its clients. It argues that the network used Microsoft's name and infrastructure for malicious purposes, thereby tarnishing its image.


Breaking Serverless on Purpose with Chaos Engineering

“You should stop when something goes wrong, even if you are not running it in production. You should stop just to understand how you are going to roll back when such things happen,” Samdan said. He echoed what Liz Fong-Jones said in her ChaosConf talk: you should plan your chaos experiments intentionally and let everyone know ahead of time. “You don’t need to surprise other people. You don’t need to surprise other departments. And, most importantly, in production, your customers should know about it,” he said. Then, if something goes terribly wrong, they aren’t worried, because you discussed it ahead of time and you already had a rollback plan that you shared with them. Chaos gets far more complicated in serverless environments, which are highly distributed and event-driven. Risks with serverless tend to come from the services you don’t have insight into or control over. Essentially, serverless is chaotic at its heart. With serverless you inherit a whole new set of failures, within its many resources ... He says a common fix for serverless issues is to aim for asynchronous communication whenever possible and then properly tune synchronous timeouts. Other serverless fixes include putting circuit breakers in place and using exponential backoff to find an acceptable rate for pacing retransmissions, as in the sketch below.
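As a rough sketch of the exponential-backoff fix, in Python; the retried callable, attempt limit, and delay constants are placeholders rather than anything prescribed in the talk:

    import random
    import time

    def call_with_backoff(fn, max_attempts=5, base=0.1, cap=5.0):
        # Retry fn(), waiting up to base * 2**attempt seconds (capped) between tries.
        for attempt in range(max_attempts):
            try:
                return fn()
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # out of attempts; let the caller handle it
                delay = min(cap, base * (2 ** attempt))
                time.sleep(random.uniform(0, delay))  # full jitter avoids retry storms

The jitter matters: if every client retries on the same schedule, the retries themselves arrive as a synchronized burst and keep the struggling service down.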


Audit .NET/.NET Core Apps with Audit.NET and AWS QLDB

As a request flows through the system, new information is added to the audit event: the component name, the identity or user name behind the executing request, the state of the data before and after modification, timestamps, machine names, a common identifier to correlate the request across components, and any other information needed to tie the request to other systems. This operation is vital for some businesses, so it is often considered part of the transaction: the cancellation of a contract counts as successful only if a record also lands in the audit trail. One could rely on the ILogger interfaces to implement this requirement, but there are a few problems: logging can easily be turned off, a failure to write a log message won't stop the application, and ILogger has no specialized primitives for audit logging. ... Audit.NET is an extensible framework for auditing executed operations in .NET and .NET Core. It comes with two types of extensions: the data providers (or data sinks) and the interaction extensions. The data providers store the audit events in various persistent storages, and the interaction extensions create specialized audit events based on the executing context, such as Entity Framework, MVC, WCF, HttpClient, and many others.
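Audit.NET's own types are .NET-specific, so the following Python snippet is purely illustrative: a hypothetical sketch of the audit-event shape described above, assuming nothing about Audit.NET's actual API, with field names invented for the example:

    import datetime
    import socket
    import uuid

    def audit_event(component, user, target, old_value, new_value, correlation_id=None):
        # Build one audit record for a single operation on `target`.
        return {
            "component": component,   # where in the system this happened
            "user": user,             # identity behind the executing request
            "target": target,         # what was changed
            "old_value": old_value,   # data before the modification
            "new_value": new_value,   # data after the modification
            "machine": socket.gethostname(),
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            # One id shared by every event the same request produces, so the
            # trail can be correlated across components.
            "correlation_id": correlation_id or str(uuid.uuid4()),
        }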


WFH has left workers feeling abandoned

One in three employees admitted that being away from the office had lowered their morale, with respondents reporting that they feel distracted during the work day and easily stressed at work. What's more, there seems to be a consensus that employers have not gone far enough in supporting their workforce: less than a quarter of employees in the US and Europe received guidance from their employer on working remotely, on topics ranging from tips on new ways to work to data-security best practices. But despite the potential difficulties of working from home day in, day out, HP's research found that office workers are keeping an eye on the bigger picture, and that overall, respondents seemed positive about the future. The majority of employees surveyed agreed that the new ways of working caused by the crisis would allow them to change their work environments for the better. Over the past few months, workers have been gauging what the future holds for their nine-to-five and preparing accordingly. The survey shows that many employees have identified continuous learning and upskilling as key to their success, and have lost no time in re-training themselves. From leadership skills to foreign languages through IT and tech support knowledge, almost six in ten respondents said that they were currently learning at least one new skill, often through free online programs.


Twitter hack probe leads to call for cybersecurity rules for social media giants

The report concludes this is a problem U.S. lawmakers need to get on and tackle stat — recommending that an oversight council be established (to “designate systemically important social media companies”) and an “appropriate” regulator appointed to ‘monitor and supervise’ the security practices of mainstream social media platforms. “Social media companies have evolved into an indispensable means of communications: more than half of Americans use social media to get news, and connect with colleagues, family, and friends. This evolution calls for a regulatory regime that reflects social media as critical infrastructure,” the NYSDFS writes, before going on to point out there is still “no dedicated state or federal regulator empowered to ensure adequate cybersecurity practices to prevent fraud, disinformation, and other systemic threats to social media giants”. “The Twitter Hack demonstrates, more than anything, the risk to society when systemically important institutions are left to regulate themselves,” it adds. “Protecting systemically important social media against misuse is crucial for all of us — consumers, voters, government, and industry. The time for government action is now.”


Google, Intel Warn on ‘Zero-Click’ Kernel Bug in Linux-Based IoT Devices

The flaw, which Google calls “BleedingTooth,” can be exploited in a “zero-click” attack via specially crafted input, by a local, unauthenticated attacker. This could potentially allow for escalated privileges on affected devices. “A remote attacker in short distance knowing the victim’s bd [Bluetooth] address can send a malicious l2cap [Logical Link Control and Adaptation Layer Protocol] packet and cause denial of service or possibly arbitrary code execution with kernel privileges,” according to a Google post on GitHub. “Malicious Bluetooth chips can trigger the vulnerability as well.” The flaw (CVE-2020-12351) ranks 8.3 out of 10 on the CVSS scale, making it high-severity. It stems from a heap-based type confusion in net/bluetooth/l2cap_core.c. A type-confusion vulnerability is a bug that can lead to out-of-bounds memory access, which in turn can enable the code execution or component crashes that an attacker can exploit. In this case, the issue is insufficient validation of user-supplied input within the BlueZ implementation in the Linux kernel. Intel, meanwhile, which has placed “significant investment” in BlueZ, addressed the security issue in a Tuesday advisory, recommending that users update the Linux kernel to version 5.9 or later.


There’s no better time to join the quantum computing revolution

It’s an exciting time to be in quantum information science. Investments are growing across the globe, such as the recently announced U.S. Quantum Information Science Research Centers, which bring together the best of the public and private sectors to solve the scientific challenges on the path to a commercial-scale quantum computer. While there’s increased research investment worldwide, there are not yet enough skilled developers, engineers, and researchers to take advantage of this emerging quantum revolution. Here’s where you come in. There’s no better time to start learning how you can benefit from quantum computing and, in the future, solve currently unsolvable questions. Here are some of the resources available to start your journey. Many developers, researchers, and engineers are intrigued by the idea of quantum computing but may not have started, perhaps because they don’t know how to begin, how to apply it, or how to use it in their current applications. We’ve been listening to the growing global community and have worked to make the path forward easier. Take advantage of these free, self-paced resources to learn the skills you need to get started with quantum.



Quote for the day:

"Tomorrow's leaders will not lead dictating from the front, nor pushing from the back. They will lead from the centre - from the heart" -- Rasheed Ogunlaru