Daily Tech Digest - June 02, 2021

A recurrent neural network that infers the global temporal structure based on local examples

"Every day, we manipulate information about the world to make predictions," Jason Kim, one of the researchers who carried out the study, told TechXplore. "How much longer can I cook this pasta before it becomes soggy? How much later can I leave for work before rush hour? Such information representation and computation broadly fall into the category of working memory. While we can program a computer to build models of pasta texture or commute times, our primary objective was to understand how a neural network learns to build models and make predictions only by observing examples." Kim, his mentor Danielle S. Bassett and the rest of their team showed that the two key mechanisms through which a neural network learns to make predictions are associations and context. For instance, if they wanted to teach their RNN to change the pitch of a song, they fed it the original song and two other versions of it, one with a slightly higher pitch and the other with a slightly lower pitch. For each shift in pitch, the researchers 'biased' the RNN with a context variable. Subsequently, they trained it to store the original and modified songs within its working memory.


Cybersecurity industry analysis: Another recurring vulnerability we must correct

Security tooling is a must-have, but we need to look wider and restore balance to the people component of security defense. Automation is the future. Why should we care about the human element of cybersecurity? Virtually everything in our lives is powered by software, and it’s true that automation is replacing the human elements that were once present in so many industries. It’s a sign of progress in a world digitizing at warp speed, with AI and machine learning among the hot topics keeping many organizations future-focused. So why, then, would a human-focused approach to cybersecurity be anything other than an antiquated solution to a technologically advancing problem? The fact that billions of data records have been stolen in breaches in the past year, including the most recent Facebook breach affecting over half a billion accounts, should indicate that we’re not doing enough (or taking the right approach) to mount a serious counter-punch against threat actors. Cybersecurity tooling is a much-needed component of cyber defense, and tools will always have a place. Analysts have been absolutely on point in recommending the latest tools in a risk mitigation approach for enterprises, and that will not change.


Researchers Confront Major Hurdle in Quantum Computing

A time crystal is a strange state of matter in which interactions between the particles that make up the crystal can stabilize oscillations of the system in time indefinitely. Imagine a clock that keeps ticking forever; the pendulum of the clock oscillates in time, much like the oscillating time crystal. By implementing a series of electric-field pulses on electrons, the researchers were able to create a state similar to a time crystal. They found that they could then exploit this state to improve the transfer of an electron’s spin state in a chain of semiconductor quantum dots. “Our work takes the first steps toward showing how strange and exotic states of matter, like time crystals, can potentially be used for quantum information processing applications, such as transferring information between qubits,” Nichol says. “We also theoretically show how this scenario can implement other single- and multi-qubit operations that could be used to improve the performance of quantum computers.” Both AQT and time crystals, while different, could be used simultaneously with quantum computing systems to improve performance.


How Ethical Hackers Play An Important Role In Protecting Enterprise Data

Data is an essential asset in the current dynamic setting. The value of data has made big organizations more vulnerable to cyberattacks. But it is wrong to believe that only a big company can suffer a data breach incident. In reality, no one is immune to data theft, whether you’re an individual, an SME, a large enterprise, or even a state. A surer way for organizations to protect themselves from the possibility of an effective malicious attack is to engage competent ethical hackers. It helps to have someone in your organizational structure who understands how malicious hackers think. In such scenarios, it makes sense to take the help of ethical hackers. Ethical hacking in cybersecurity is grounded in data protection. Unlike cybercriminals, ethical hackers operate with the consent of the client. They use the same tools and techniques as malicious attackers. However, because cybersecurity and ethical hacking experts can think like the bad guys, their intent is to protect and secure your network. They can quickly discover your system vulnerabilities and suggest how you can resolve them before they are exploited.


Microsoft, GPT-3, and the future of OpenAI

There’s a clear line between academic research and commercial product development. In academic AI research, the goal is to push the boundaries of science. This is exactly what GPT-3 did. OpenAI’s researchers showed that with enough parameters and training data, a single deep learning model could perform several tasks without the need for retraining. And they have tested the model on several popular natural language processing benchmarks. But in commercial product development, you’re not running against benchmarks such as GLUE and SQuAD. You must solve a specific problem, solve it ten times better than the incumbents, and be able to run it at scale and in a cost-effective manner. Therefore, if you have a large and expensive deep learning model that can perform ten different tasks at 90 percent accuracy, it’s a great scientific achievement. But when there are already ten lighter neural networks that perform each of those tasks at 99 percent accuracy and a fraction of the price, then your jack-of-all-trades model will not be able to compete in a profit-driven market.


Has DevOps killed the BA/QA/DBA Roles?

As the industry continues towards DevOps and Cloud, these fields will thin out. Each of the roles will trend towards more of a specialization, especially the DBA, since the operational overhead of maintaining a DB is rapidly decreasing. They’ll last longer at big companies, but the tolerance for lower performers will drastically decline. At the same time, however, the demand for data expertise will keep accelerating, as shown in the forecast below. Growth in warehousing and data science should ensure data specialization remains lucrative, and DBAs are well-poised to transition. Of the three, the BA role seems safest. The average software developer simply does not have the time (nor often the capabilities) to maintain the social network of a strong BA. However, as more companies migrate to DevOps/Agile, the feedback barrier between users and developers will continue to shrink. As it does, BAs who are not technically competent will be pushed out. The QA role is the hardest to predict. As automation improves, demand for QA staff to run manual scripts and “catch bugs” will disappear.


How to Get a Cybersecurity Job in 2021

There are a bunch of certifications, from CompTIA’s Security+ to others, that will help signal your readiness for cybersecurity jobs. Some are more entry-level and require IT competencies, such as the A+. But some will require you to have job experience in cybersecurity (such as the CISSP). There’s a bit of a chicken-and-egg situation here, and you might wonder: how can you get job experience if you need job experience to get the job in the first place? Adjacent job experience can often make a difference. Many people transition into cybersecurity from IT roles, such as network administration, system administration, or working on the IT helpdesk, which is an entry-level role. You can gain experience there and transition over. There are also programs tailored for veterans and people with law enforcement backgrounds to get into cybersecurity. Lastly, there are many cybersecurity internships being offered to bridge this gap – though with the right backing, training, and experience, you can skip ahead to junior-level analyst roles. SOC analyst roles are a good way to break into the cybersecurity industry. Security operations centers need analysts to parse through different threats.


Making A Case For Serverless Machine Learning

The first benefit of serverless machine learning is that it is very scalable. It can handle up to 10k requests at the same time without requiring any additional logic. It doesn’t consume extra time to scale, which makes it perfect for handling random high loads. Secondly, with the pay-as-you-go architecture of serverless machine learning, you don’t have to pay for unused server time. That can save an enormous amount of money. For example, if a user has 50k requests a month, they pay only for those 50k requests. Thirdly, infrastructure management becomes very easy: there is no need to hire a dedicated specialist, as it can be handled by a backend developer. For instance, AWS Lambda is one of the most popular serverless cloud services with these advantages. It lets users run code without managing servers. It obviates the need for developers to explicitly configure, deploy, and manage long-term computing units. Training in serverless machine learning does not require extensive programming knowledge. Basic knowledge of Python, machine learning, Linux, and the terminal, along with an AWS account, is enough to get one started.
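
As a concrete illustration, here is a minimal sketch of what a serverless inference function might look like in the AWS Lambda style. The model file, event format, and caching strategy are illustrative assumptions, not a prescribed pattern.

```python
# Minimal sketch of a serverless inference function in the AWS Lambda style.
# The bucket-free model file, event format, and response shape are assumptions.
import json
import pickle

_model = None  # cached across warm invocations of the same container

def _load_model():
    # In a real deployment the model would ship in the package or be
    # fetched from S3 (e.g., via boto3); here we unpickle a bundled file.
    with open("model.pkl", "rb") as f:
        return pickle.load(f)

def lambda_handler(event, context):
    """Entry point AWS Lambda calls for each request."""
    global _model
    if _model is None:          # cold start: load once, reuse afterwards
        _model = _load_model()
    features = json.loads(event["body"])["features"]
    prediction = _model.predict([features])[0]
    # You pay only for the milliseconds this handler runs; scaling to
    # thousands of concurrent requests needs no extra logic here.
    return {"statusCode": 200,
            "body": json.dumps({"prediction": float(prediction)})}
```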


How Blockchain Technology Can Benefit the Internet of Things

The distributed aspect of blockchain means that data are replicated across several computers. This makes hacking more challenging, since an attacker now has several target devices to compromise. The redundancy in storage brought by blockchain technology adds extra security and enhances data access, since users in IoT ecosystems can submit to and retrieve their data from different devices, Carvahlo said. Continuing with this example, say the burglar is captured and claims in court that the recorded video is forged evidence. The immutable nature of blockchain technology means that any change to the stored data can be easily detected. Thus, the burglar’s claim can be verified by looking at attempts to tamper with the data, he said. However, the decentralization aspect of blockchain technology can be a major issue when storing data from IoT devices, according to Carvahlo. “Decentralization means that the computers used to store data [in a distributed fashion] might belong to different entities,” he said. “In other words, if not implemented appropriately, there is a risk that users’ sensitive data can now be by default stored by and available to third parties.”
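
A toy example makes the immutability point concrete: when each block stores the hash of its predecessor, editing any stored record breaks every hash that follows. The record contents and field names below are invented for illustration.

```python
# Toy illustration of why tampering with block-chained data is detectable:
# each block stores the hash of its predecessor, so editing any record
# breaks every hash that follows. Record contents are invented.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64
    for data in records:
        block = {"data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    prev = "0" * 64
    for i, block in enumerate(chain):
        if block["prev_hash"] != prev:
            return f"tampering detected at block {i}"
        prev = block_hash(block)
    return "chain intact"

chain = build_chain(["cam1: 00:01 motion", "cam1: 00:02 entry", "cam1: 00:03 exit"])
print(verify(chain))                                 # chain intact
chain[1]["data"] = "cam1: 00:02 nothing happened"    # the record is edited
print(verify(chain))                                 # tampering detected at block 2
```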


Software Engineering at Google: Practices, Tools, Values, and Culture

The skills required for developing good software are not the same skills that were required (at one point) to mass produce automobiles, etc. We need engineers to respond creatively, and to continually learn, not do one thing over and over. If they don’t have creative freedom, they will not be able to evolve with the industry as it, too, rapidly changes. To foster that creativity, we have to allow people to be human, and to foster a team climate of trust, humility, and respect. Trust to do the right thing. Humility to realize you can’t do it alone and can make mistakes. ... Building with a diverse team is, in our belief, critical to making sure that the needs of a more diverse user base are met. We see that historically: first-generation airbags were terribly dangerous for anyone that wasn’t built like the people on the engineering teams designing those safety systems. Crash test dummies were built for the average man, and the results were bad for women and children, for instance. In other words, we’re not just working to build for everyone, we’re working to build with everyone. It takes a lot of institutional support and local energy to really build multicultural capacity in an organization. We need allies, training, and support structures.



Quote for the day:

"Nothing so conclusively proves a man's ability to lead others as what he does from day to day to lead himself." -- Thomas J. Watson

Daily Tech Digest - June 01, 2021

Microsoft launches first Asia Pacific Public Sector Cyber Security Executive Council

With most technology infrastructure owned and operated by private companies, it is also mission critical that governments form coalitions with leading tech companies to lead effective cyber-defense strategies and safeguard our region against attackers. Dato’ Ts. Dr. Haji Amirudin Abdul Wahab FASc, CEO of CyberSecurity Malaysia, shared, “Cybersecurity is an important national agenda that cannot rely solely on the back of the IT team. It should be a priority and responsibility of all individuals, as we continue to see cyber-criminal activities rise exponentially with the proliferation of data and digital connectivity. This coalition certainly establishes stronger partnerships with industry leaders and practitioners that allow us to fortify our security postures and combat cybercrime.” On the future of the cybersecurity eco-system and the role the coalition will play, Ph.D. candidate ChangHee Yun, Principal Researcher of the AI/Future Strategy Center, National Information Society Agency Korea, added, “the collective intelligence amongst the Asia Pacific nations is paramount to jointly share best practices and strategies that will enable us to resolve cybersecurity challenges at a faster pace, and in a more proactive manner. ...”


A look at API prioritisation strategy of ICICI Lombard

Like any other software development, API development and rollout have their own set of challenges. One of the most important is ensuring security and encryption. A robust security framework and periodic security audits of applications are a must for ensuring not only that the endpoints of applications are tracked, but also that a sufficient level of encryption and account-level security is maintained. Detecting vulnerabilities and plugging them is an ongoing affair and needs to be monitored regularly. Data protection is a critical aspect of security that we pay close attention to. According to Nayak, one of the key areas where organisations make mistakes involves the estimation of volumes for integration. Since a lot of APIs are built keeping the number of users in mind, it becomes extremely important to also estimate user-based rate limits to ensure scalability. “User-based rate limits also help in tracking the number of calls per user, and outliers are identified as a part of the security evaluation.”
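
As a sketch of the idea, a per-user sliding-window limiter is only a few lines. The window size, call limit, and outlier threshold below are assumptions for illustration, not ICICI Lombard's actual values.

```python
# Sketch of a per-user sliding-window rate limiter that also surfaces
# outliers for security review. Limits and thresholds are assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_CALLS_PER_WINDOW = 100          # assumed per-user limit

_calls = defaultdict(deque)         # user_id -> timestamps of recent calls

def allow_request(user_id, now=None):
    now = time.time() if now is None else now
    window = _calls[user_id]
    # Drop timestamps that fell out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_CALLS_PER_WINDOW:
        return False                # reject; candidate outlier for review
    window.append(now)
    return True

def outliers(threshold=0.8):
    """Users close to their limit -- useful input for a security evaluation."""
    return [u for u, w in _calls.items()
            if len(w) >= threshold * MAX_CALLS_PER_WINDOW]
```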


HITRUST explained: One framework to rule them all

To understand how this works, we need to first understand what we mean when we talk about a security framework. This isn't some whiz-bang software tool or hardware appliance; instead, it's a set of policies and procedures meant to improve your organization's cyber security strategies. There are innumerable frameworks available out there, some put out by for-profit companies, some by industry cybersecurity orgs, and some by government agencies. This last category will become important for our discussion: many government regulations that touch on cybersecurity have at their heart prescribed frameworks that companies need to implement in order to be in compliance. HITRUST's framework, known as the HITRUST CSF, works along these same lines. What makes HITRUST special is that it isn't attempting to impose its own unique security philosophy onto its users; rather, it consolidates multiple existing public domain security frameworks into a single document. For instance, plenty of these frameworks require all passwords within an organization to be eight characters or more; therefore, the HITRUST CSF includes an eight-character password requirement for those organizations to which that control applies.
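
As a toy illustration of how a consolidated control might be expressed and checked programmatically, consider the eight-character password rule mentioned above. The data structures here are invented for the example and bear no relation to the actual HITRUST CSF format.

```python
# Toy example of checking one consolidated control -- the eight-character
# password rule cited above -- against an organization's policy. The
# control structure is an illustrative assumption, not the HITRUST CSF.
CONTROLS = [
    {"id": "pwd-min-length",
     "applies_to": ["healthcare", "finance"],
     "check": lambda policy: policy.get("min_password_length", 0) >= 8},
]

def assess(org_type, policy):
    findings = []
    for control in CONTROLS:
        if org_type in control["applies_to"] and not control["check"](policy):
            findings.append(control["id"])
    return findings

print(assess("healthcare", {"min_password_length": 6}))   # ['pwd-min-length']
print(assess("healthcare", {"min_password_length": 12}))  # []
```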


Microsoft's Low-Code Strategy Paints a Target on UIPath and the Other RPA Companies

Microsoft has assembled all of the pieces required by an enterprise to deliver low code solutions. If they execute well on this strategy they are poised to become unassailable in the low-code world. When Microsoft talks about low code, they have a pretty expansive view. The language they use when describing low code encompasses everything from an accountant writing a formula in Excel, to a software engineer using a pre-built connector to pull data from an API, to a consulting firm building a bespoke end-to-end claims management solution for a customer. Microsoft realises that the real challenge with scaling low code is not writing low code applications - it’s deploying and monitoring low code applications. And it is firmly on a trajectory to solving this challenge. ... Microsoft has put together a pretty impressive strategy. I don’t know how much is by design and how much by tactical zigging and zagging but, judging by the dates that the company released each of the pieces in this strategy, it looks like sometime in 2019 someone at Microsoft had a lightbulb moment about how all this should fit together, and they’ve been executing against that strategy ever since.


Are MRI Scans Done By AI Systems Reliable?

Convolutional neural networks are trained to map the measurement directly to an artifact-free image, or to map a coarse least-squares reconstruction of the under-sampled measurement to an artifact-free image. The best-performing methods in the fastMRI competition are all trained networks and yield significant improvements over classical methods. Traditional compressed sensing (CS) methods are popular in MRI reconstruction and are used in clinical practice. Untrained networks are also powerful for compressive sensing, and simple convolutional architectures such as the Deep Decoder work well in practice. For the experiments, the researchers picked ten randomly chosen proton-density-weighted knee MRI scans from the fastMRI validation set. For each of those images, a small perturbation was added to the measurement. The results showed that both trained and untrained methods are sensitive to small adversarial perturbations. For the next experiment, to check for dataset shift, the researchers tested on the Stanford dataset, retrieved by collecting all 18 available knee volumes. “Our main finding is that all reconstruction methods perform worse on the new MRI samples, but by a similar amount.”
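
The shape of the perturbation experiment is simple to sketch. The numpy toy below undersamples a 1-D signal in Fourier space, perturbs the measurement, and compares zero-filled least-squares reconstructions; the actual study applies this protocol to trained and untrained networks with adversarially chosen (not random) perturbations, so everything here is a simplifying assumption.

```python
# Rough toy of the experimental setup: undersample a signal in Fourier
# space, add a small perturbation to the measurement, and compare simple
# zero-filled reconstructions. Sizes, signal, and the use of a random
# (rather than adversarial) perturbation are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 256
image = np.zeros(n)
image[96:160] = 1.0                               # toy 1-D "anatomy"

mask = rng.random(n) < 0.5                        # keep ~50% of k-space
measurement = np.fft.fft(image) * mask

def reconstruct(y):
    # Zero-filled inverse FFT: the simplest least-squares reconstruction.
    return np.real(np.fft.ifft(y))

perturbation = (rng.normal(size=n) + 1j * rng.normal(size=n)) * mask * 0.05
clean = reconstruct(measurement)
perturbed = reconstruct(measurement + perturbation)

print("reconstruction change:", np.linalg.norm(perturbed - clean))
```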


A human-centric approach to protect against cybersecurity threats

Teaching and reinforcing positive cyber hygiene among employees is one way in which they can help in defending against cyberattacks: the consistent, safe coaching of employees whenever they perform a manoeuvre that could compromise important data or open them up to a threat. This could be attaching a document with sensitive information to an outside source using a document-sharing service, or clicking on an e-mail without reviewing the source. With practice and consistent guidance, it is possible to train employees with new programmes that help to curb unwanted behaviours, with notifications made to employees when one of these incidents is about to occur. The employee can learn in real time why they cannot or should not perform this action. It can also be a comfort to employees to know they are protected within this system of alerts, with additional options to anonymise which employee is connected to each incident – in other words, ensuring full visibility while maintaining privacy. With time, these actions will become habits. Human error is always likely to occur, but with incident-based training, employees and companies can better protect themselves from outside risks.


Investing in the Cybersecurity Workforce of Tomorrow

One solution that will help close the skills gap is to seek out and hire underrepresented candidates. However, providing them with the needed educational resources and skill-building opportunities is yet another challenge. Cybersecurity education is not always accessible to these groups, which typically leads them to pursue other career paths. Investing in the preparation of essential talent pools, such as students, is one key component to closing the cybersecurity skills gap. With the crucial need for people with cyber skills, IT recruiters need to consider candidates who don’t fit the traditional mold of a cybersecurity professional. ... Organizations must provide appropriate resources, and candidates must be willing to take advantage of this opportunity. Along with universities that offer cybersecurity curricula, several community organizations recognize the value of diversity in the industry, providing access to content and programs designed to address the talent shortage. ICMCP and WiCyS are two examples of groups that partner with private organizations to create access to different types of training and mentorship programs for women and minorities looking to transition or grow within the field of cybersecurity.


CISO Confidence Is Rising, but Issues Remain

Many CISOs feel they lack boardroom support. Fewer than two-thirds of global CISOs surveyed for the report indicated that they agree with their board's approach to cybersecurity. Fifty-seven percent of them indicated that the expectations placed on their role are excessive. Fifty-nine percent of global CISOs say their reporting line hinders their job effectiveness. This view is most prevalent in the technology sector, where three-quarters of CISOs expressed this sentiment. In the public sector, the issue is less pressing; here, just 38% felt reporting was a burden. The apparent distance between them and their C-suite colleagues makes many CISOs feel they can't do their jobs to the best of their ability. Nearly half of global CISOs don't believe their organization is setting them up to succeed. What's worse, 24% of CISOs strongly agree this is the case. The CISO's ability to trade off agility and security will be even more critical in the future. Now that more organizations know what remote working brings along in terms of cost-savings and flexibility, it's likely that many will adopt hybrid working models going forward. But CISOs will need to convince their boards that the passable approach they used over the past year won't be enough in the long term.


How data centres can help businesses be more sustainable

The first step for many providers is in a move away from fossil fuels. Data centres are particularly well placed to benefit from renewable energy sources due to their stable power consumption. Indeed, some providers are already achieving 100% zero-carbon energy in their buildings, resulting in lower emissions of carbon and other types of pollution, as well as cost efficiencies. Google is another trailblazer in this area – its large-scale procurement of wind and solar power has made Google the world’s largest corporate buyer of renewable energy. Renewable energy is, and will continue to be, an important part of the strategy to reduce carbon emissions, but different global locations will benefit from different approaches, and it’s important to move beyond a straight ‘we must embrace renewables’ message, to one that recognises the nuances of location. For example, in the Middle East and parts of the US, solar energy is much more prevalent than in the Nordics. Other locations have different options: a good example is at a campus on the southwestern tip of Iceland, which runs almost entirely on geothermal and hydroelectric power.


Security leaders more concerned about legal settlements than regulatory fines

Egress CEO Tony Pepper comments: “The financial cost of data breach has always driven discussion around GDPR – and initially, it was thought hefty regulatory fines would do the most damage. But the widely unforeseen consequences of class action lawsuits and independent litigation are now dominating conversation. Organizations can challenge the ICO’s intention to fine to reduce the price tag, and over the last year, the ICO has shown leniency towards pandemic-hit businesses, such as British Airways, letting them off with greatly reduced fines that have been seen by many as merely a slap on the wrist. “With data subjects highly aware of their rights and lawsuits potentially becoming ‘opt-out’ for those affected in future, security leaders are right to be nervous about the financial impacts of litigation.” Lisa Forte, Partner at Red Goat Cyber Security, comments: “The greatest financial risk post breach no longer sits with the regulatory fines that could be issued. Lawsuits are now commonplace and could equal the writing of a blank cheque if your data is compromised.



Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boye

Daily Tech Digest - May 31, 2021

How The World Is Updating Legislation in the Face Of Persistent AI Advances

Recently, 13 cities across the US placed a ban on the use of facial recognition technology by the police. Interestingly, 12 of these 13 cities were Democrat-led, pointing to the cultural differences within a country itself. The European Union is the gold standard when we talk about data privacy and laws governing the various aspects of technology. To protect individuals’ rights and freedoms, Article 22 of the GDPR, “Automated individual decision-making, including profiling,” ensures the availability of human intervention in automated decision-making in cases where individuals’ rights and freedoms are affected. The first paragraph, “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her,” and the third paragraph, “the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision,” provide individuals the right to human intervention.


Why ML Capabilities Of GCP Is Way Ahead Of AWS & Azure

TPUs are Google’s custom-developed application-specific integrated circuits (ASICs) built to accelerate ML workloads. A big advantage for GCP is Google’s strong commitment to AI and ML. “The models that used to take weeks to train on GPU or any other hardware can be put out in hours with TPU. AWS and Azure do have AI services, but to date, AWS and Azure have nothing to match the performance of the Google TPU,” said Jeevan Pandey, CTO, TelioLabs. ... Google Cloud’s open-source contributions, especially in tools like Kubernetes –a portable, extensible, open-source platform for managing containerized workloads and services, facilitating declarative configuration and automation– have worked to its advantage. ... Google Cloud’s speech and translate APIs are much more widely used than their counterparts. According to Gartner’s 2021 Magic Quadrant, Google Cloud has been named the leader for Cloud AI services. Pre-trained ML models can be instantly used to classify objects in an image into millions of predefined categories. Additionally, one of the top ML services from Google Cloud is Vision AI, powered by AutoML.
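
For a sense of how little code a pre-trained service requires, here is the standard pattern for calling the Cloud Vision API from Python. It assumes the google-cloud-vision package is installed and GCP credentials are configured; the file name is a placeholder.

```python
# Minimal use of Google Cloud's pre-trained Vision API from Python.
# Requires `pip install google-cloud-vision` and configured GCP credentials;
# "photo.jpg" is a placeholder file name.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Classify the image against Google's pre-trained label taxonomy.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```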


How Robotic Processing Automation can improve healthcare at scale

RPA isn’t just a boon for patient-facing organizations—healthcare vendors are getting in on the action, too. For example, the company I work for faced the daunting challenge of transferring over 1 million pieces of patient data from one EMR to another. As any medical professional can attest, switching EMRs is a notoriously time-consuming process. So, we invested in RPA to bring efficiency to an otherwise manual and laborious task. In the end, we saved valuable time—and a significant chunk of change. ... One of the biggest contributors to burnout is the ever-increasing administrative work stemming from non-clinical tasks like documentation, insurance authorizations, and scheduling—all things that can be done faster and more accurately with RPA. And when providers are freed from the monotony, they have more time to focus on the parts of the job that they really enjoy. This, in turn, boosts morale and productivity, thus enhancing care delivery and optimizing patient outcomes overall. For those working in health care, the demand for digital solutions like RPA feels like the dawning of the new era—albeit one that is met with mixed emotions.


The many lies about reducing complexity part 2: Cloud

Managers in IT are sensitive to it, as complexity generally is their biggest headache. Hence, in IT, people are in a perennial fight to make the complexity bearable. One method that has been popular for decades has been standardisation and rationalisation of the digital tools we use, a basic “let’s minimise the number of applications we use”. This was actually part 1 of this story: A tale of application rationalisation (not). That story from 2015 explains how many rationalisation efforts were partly lies. (And while we’re at it: enjoy this Dilbert cartoon that is referenced therein.) Most of the time multiple applications were replaced by a single platform (in short: a platform is software that can run other software) and the applications had to be ‘rewritten’ to work ‘inside’ that platform. So you ended up with one extra platform, the same number of applications and generally a few new extra ways of ‘programming’, specific for that platform. That doesn’t mean it is all lies. The new platform is generally dedicated to a certain type of application, which makes programming these applications simpler. But the situation is not as simple as the platform vendors argue. 


Implementing Nanoservices in ASP.NET Core

There is no precise definition of how big or small a microservice should be. Although microservice architecture can address a monolith's shortcomings, each microservice might grow large over time. Microservice architecture is not suitable for applications of all types. Without proper planning, microservices can grow as large and cumbersome as the monolith they are meant to replace. A nanoservice is a small, self-contained, deployable, testable, and reusable component that breaks down a microservice into smaller pieces. Unlike a microservice, a nanoservice does not necessarily reflect an entire business function. Since nanoservices are smaller than microservices, different teams can work on multiple services at a given point in time. A nanoservice should perform one task only and expose it through an API endpoint. If you need your nanoservices to do more work for you, link them with other nanoservices. Nanoservices are not a replacement for microservices - they compensate for the shortcomings of microservices.
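
The article's context is ASP.NET Core, but the shape of a nanoservice is language-agnostic. Here is a dependency-free Python sketch of the "one task, one endpoint" idea; the task itself is an arbitrary example.

```python
# Language-agnostic sketch of a nanoservice: one task, one endpoint.
# (The article discusses ASP.NET Core; this uses Python's standard
# library to keep the sketch dependency-free.)
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CelsiusToFahrenheit(BaseHTTPRequestHandler):
    """The entire service: converts a temperature, nothing else."""
    def do_GET(self):
        try:
            celsius = float(self.path.rsplit("/", 1)[-1])
        except ValueError:
            self.send_response(400)
            self.end_headers()
            return
        body = json.dumps({"fahrenheit": celsius * 9 / 5 + 32}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. GET http://localhost:8000/convert/21.5
    HTTPServer(("", 8000), CelsiusToFahrenheit).serve_forever()
```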


Five Data Governance Trends for Digital-Driven Business Outcomes in 2021

Knowledge of data-in-context, data processes, the best techniques to provision data, and the tools enabling these methods of self-service is crucial to democratizing data. However, with technology advancements, including virtualization, self-service discovery catalogs, and data delivery mechanisms, internal data consumers can shop for and provision data in shorter cycles. In 2020, it took organizations anywhere between one and three weeks to provision complex data that includes integration from multiple sources. Also, an increase in data awareness will help data consumers explore further the available dark data that can provide predictive insights, creating new user stories that can propel customer journeys. ... A lack of focus is common across organizations, as they treat Data Governance as an extension of either a compliance or a risk function. Data Literacy will, in fact, change the attitude of business owners towards having to actively manage and govern data. There are immediate and cumulative benefits from actively governing data, whether by defining data or fixing bad-quality data. But there is a need for a value-realization framework to actively manage the benefits of Data Management services.


Best practices for securing the CPaaS technology stack

Certifications are certainly important to consider when evaluating options, but even so, certifications don’t mean security. It is a best practice to check on the maturity of these vendor-specific certifications, as some companies go through a process of self-certification that doesn’t necessarily ensure the level of security your organization needs. Sending a thoughtful questionnaire to multiple vendors can be helpful for scoring these vendor’s security, offering a holistic and specific viewpoint to be considered by an organization’s IT team. On the customer end, in-house security and engineering staff can prep for CPaaS implementation by becoming familiar with the use of APIs and the authentication methods, communications protocols and the data that flows to and from them. Hackers routinely perform reconnaissance to find unprotected APIs and exploit them. Once CPaaS is incorporated into the hybrid work model technology stack, it is a best practice for an organization to focus its sights on its endpoint management. The use of a centralized endpoint management system that pushes patches for BIOS, operating systems, and applications is necessary for protecting the cloud network and customer data once a laptop connects.


3 SASE Misconceptions to Consider

Solution architecture is important, and yes, you want to minimize the number of daisy chains to reduce complexity. However, it doesn't mean you cannot have any daisy chains in your solution. In fact, dictating zero daisy chains can have consequences — not for performance, but for security. SASE consolidates a wide array of security technologies into one service, yet each of those technologies is a standalone segment today — with its own industry leaders and laggards. Any buyer who dictates "no daisy chains" is trusting that one single SASE provider can (all by itself) build the best technologies across a constellation of capabilities that is only growing larger. Being beholden to one company is not pragmatic given that the occasional daisy chain greatly increases the ability to unite best-of-breed technologies under one service provider's umbrella. ... SASE revolves around the cloud and is undoubtedly about speed and agility achieved through cloud-deployed security. But SASE doesn't mean the cloud is the only way to go and you should ignore everything else. Instead, IT leaders must take a more practical position, using the best technology given the situation and problem.


Advice for Someone Moving From SRE to Backend Engineering

The work you’re doing as an SRE will partly depend on your company culture. Without a doubt, some organizations will relegate their SREs to driving existing processes like watching the on-call queue to make sure there are no tickets, running deployments, etc. This can make folks feel like they aren’t progressing. However, today there are a lot more things you can do as an SRE than you once could. You used to just have Bash. Now you have many automation opportunities that will hone your programming skills. You can configure Kubernetes and Terraform. There's a bunch of code-oriented tools that you can use. You can orchestrate your stuff in Python. You could also use something like Shoreline if you want, which is “programming for operations” and allows you to think of the world in terms of control loops, and how you can automate there. DevOps has also increased the Venn-diagram overlap between SRE and backend engineering. Previously, it was engineers using version control and package managers, which was separate from SREs using deployment systems and Linux administration tools.
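
The control-loop framing is worth a quick sketch: observe a metric, compare it with the desired state, remediate, repeat. The metric, threshold, and remediation below are placeholders for whatever your environment actually exposes.

```python
# Sketch of "operations as a control loop": observe a metric, compare to
# the desired state, and remediate. The metric source and remediation
# are placeholders, not references to any specific product's API.
import time

DESIRED_MAX_QUEUE = 1000

def read_queue_depth():
    """Placeholder: in practice, query your metrics system here."""

def scale_out():
    """Placeholder: e.g., bump a Kubernetes deployment's replica count."""

def control_loop(interval_s=30):
    while True:
        depth = read_queue_depth()
        if depth is not None and depth > DESIRED_MAX_QUEUE:
            # Actual state diverged from desired state: remediate.
            scale_out()
        time.sleep(interval_s)
```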


Inspect & Adapt – Digging into Our Foundations of Agility

When we need to change, we usually feel a resistance against it. Take the current pandemic, for instance. The simple action of wearing a facemask in public has caused indisputable resistance in many of us. Cognitively we understand that there is a benefit to doing so, even if there were long discussions on exactly how beneficial it would be. But emotionally it did not come naturally or easily to most. Do you remember how it felt the first time you wore a facemask when entering the supermarket? It was not very pleasant, was it? But even when we are the driver of change, we might find resistance against it. New year’s resolutions come to mind again. The majority of new year's resolutions are abandoned come February, even though the desired results have not been achieved. In other words, the resistance to change might sometimes show up late to the party. What might be missing here is endurance and resilience to small setbacks. I believe that we need a thorough understanding of the situation we are currently in. This sounds simple and easy. And on a mid-level it is. "We need to come out of the pandemic with a net positive", a director of a company might say.



Quote for the day:

"It's very important in a leadership role not to place your ego at the foreground and not to judge everything in relationship to how your ego is fed." -- Ruth J. Simmons

Daily Tech Digest - May 30, 2021

Wanted: Millions of cybersecurity pros. Rate: Whatever you want

In the United States, there are around 879,000 cybersecurity professionals in the workforce and an unfilled need for another 359,000 workers, according to a 2020 survey by (ISC)2, an international nonprofit that offers cybersecurity training and certification programs. Globally, the gap is even larger at nearly 3.12 million unfilled positions, the group says. Its CEO, Clar Rosso, said she thinks the need may actually be higher, given that some companies put off hiring during the pandemic. The needs range from entry-level security analysts, who monitor network traffic to identify potential bad actors in a system, to executive-level leaders who can articulate to CEOs and board directors the potential financial and reputational risks from cyber attacks. The US Bureau of Labor Statistics projects "information security analyst" will be the 10th fastest growing occupation over the next decade, with an employment growth rate of 31% compared to the 4% average growth rate for all occupations. If demand for cybersecurity professionals in the private sector increases dramatically, some experts say talented workers could leave the government for more lucrative corporate jobs


100 Days To Stronger Cybersecurity For The US Electric Grid

Regardless of company size or ownership status, all organizations that support the BES are required to comply with a set of cybersecurity standards known as the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC-CIP) standards. NERC-CIP defines the reliability requirements for planning, operating and protecting the North American bulk power supply system. It covers everything from identifying and categorizing assets, to implementing physical and digital security controls, to dealing with incidents and recovering from a cyber breach. As any security officer knows, “compliance” does not guarantee “security.” Even if all companies that are part of the BES are fully compliant with NERC-CIP — and that’s a big “if” — it’s still a good idea to have a group of experts examine the security controls and bring them up to date to be able to counter current threats from a variety of adversaries. The DOE’s 100-day plan states that “the initiative modernizes cybersecurity defenses and encourages owners and operators to implement measures or technology that enhance their detection, mitigation, and forensic capabilities” 


Facebook Launches AI That Understands Language Without Labels

In a recent blog post, Facebook revealed its new AI-based speech recognition technology, wav2vec-Unsupervised (or wav2vec-U), which aims to solve the problems posed by transcribing languages for which little or no labelled data exists. This is a method by which individuals can build speech recognition systems that do not require transcribed data. The ML algorithm still requires some form of training: wav2vec-U is trained purely on recorded speech audio and unpaired text. This method entails first learning the structure of the target language’s speech from unlabelled audio. Using wav2vec 2.0, Facebook’s self-supervised speech recognition model, and a k-means clustering algorithm, wav2vec-U segments the voice recording into speech units loosely based on individual sounds. For instance, the word cat would correspond to the sounds “/K/”, “/AE/”, and “/T/”. This allows it to comprehend the structure of the speech. To recognise the words in an audio recording, Facebook uses a generative adversarial network (GAN) consisting of a generator and a discriminator network. The generator takes each audio segment embedded in self-supervised representations and predicts a phoneme.
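
The first, clustering stage is easy to sketch. Below, random vectors stand in for wav2vec 2.0 frame representations, and contiguous frames sharing a k-means cluster are merged into candidate speech units; everything here is a simplified stand-in for the actual pipeline.

```python
# Rough sketch of the first stage described above: cluster self-supervised
# speech representations with k-means so that contiguous frames sharing a
# cluster form candidate phonemic units. Random vectors stand in for real
# wav2vec 2.0 features; cluster count and sizes are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 32))       # stand-in for wav2vec 2.0 features

kmeans = KMeans(n_clusters=40, n_init=10, random_state=0).fit(frames)
labels = kmeans.labels_

# Merge runs of identical cluster ids into segments (start, end, unit).
segments, start = [], 0
for i in range(1, len(labels) + 1):
    if i == len(labels) or labels[i] != labels[start]:
        segments.append((start, i, int(labels[start])))
        start = i

print(f"{len(segments)} candidate speech units from {len(labels)} frames")
```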


Why cloud governance needs to be an open source affair

Keep in mind that Cloud Custodian emerged from work Thangavelu was doing at Capital One, which is a big company with over 50,000 employees and tens of billions in revenue. It was a laboratory primed to help Thangavelu "service the different needs from different groups within the enterprise: audit, risk, security, application teams, lines of business," he said. That helped make Cloud Custodian incredibly useful within his enterprise. But just one enterprise. Open source increased the scope and utility of Cloud Custodian beyond one company's needs. "As we've gotten to open source, that pool of use cases simply expanded," he noted. No matter how creative your product managers, they're always necessarily constrained by the needs of the business they're running. By contrast, Thangavelu continued, "Open source is the strongest way to achieve [expanded scope] because your usage and your users address a wider swath of needs than any given company has. They represent the needs of a large diverse set of interests. And they're all pulling in different directions." This push-and-pull from a growing Cloud Custodian community has made it a useful tool for organizations that may have thousands or even tens of thousands of diverse policies to manage.


The Emerging Role of Artificial Intelligence in Human Lung Imaging

Having recently risen to prominence, robust AI methods mark the onset of a new era in lung image analysis. Adept at seeing and making sense of vital image-led patterns, AI tools help make the respiratory field more effective, improving diagnosis and therapeutic planning and letting pulmonologists spend extra time with patients. Hence, various attempts have been made lately to develop automated segmentation techniques. Yet the pandemic-driven strain on the healthcare system, and on radiology in particular, will remain until these AI-based approaches are adopted. A major hurdle for lobe segmentation arises because different respiratory diseases affect the lung architecture in different ways. For example, COVID-19 pneumonitis would manifest on imaging very differently from pulmonary emphysema. For respiratory physicians, accurate lobar segmentations are vital in order to make treatment plans appropriately. Inaccurate lobe segmentation can give misleading information about the disease process, which can lead to erroneous treatment decisions.


Network Monitoring: The Forgotten Cybersecurity Tool

Networks can be very complex, and many are segmented into VLANs to segregate traffic. What’s more, there are many devices on the network that can shape or route traffic depending on how the network infrastructure has been configured. “Today, networks are highly segmented, yet still interconnected; there are numerous devices, such as content filtering appliances, load balancers and so on, that all work together to shape and control network traffic,” Gridelli said. “Here, active network monitoring can verify whether or not security policies are properly in effect, and detect unauthorized changes to the network infrastructure.” Active network monitoring tools often deploy sensors, which can look into a network and report on what is happening on that network. Administrators can define policies that verify network segmentation, segregation and even the functionality of content filtering devices. By running end-to-end active network monitoring tests, it’s possible to also verify whether certain security policies, such as compliance requirements, are working as intended. Sensors can be installed on protected networks, such as those used for compliance (PCI, HIPAA, etc.)
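
A minimal active segmentation test boils down to trying connections that policy says should succeed or fail. The hosts, ports, and policy table below are invented for illustration.

```python
# Sketch of an active segmentation test: from a given vantage point, try
# to reach endpoints that policy says must be blocked or allowed. Hosts,
# ports, and the policy table are illustrative assumptions.
import socket

# (host, port, expected_reachable)
POLICY = [
    ("10.0.1.10", 5432, False),   # PCI database must NOT be reachable here
    ("10.0.2.20", 443,  True),    # internal API should be reachable
]

def reachable(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port, expected in POLICY:
    actual = reachable(host, port)
    status = "OK" if actual == expected else "POLICY VIOLATION"
    print(f"{host}:{port} reachable={actual} expected={expected} -> {status}")
```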


Graphs as a foundational technology stack: Analytics, AI, and hardware

Interest is expanding as graph data takes on a role in master data management, tracking laundered money, connecting Facebook friends, and powering the search page ranker in a dominant search engine. Panama Papers researchers, NASA engineers, and Fortune 500 leaders: They all use graphs. According to Eifrem, Gartner analysts are seeing explosive growth in demand for graph. Back in 2018, about 5% of Gartner’s inquiries on AI and machine learning were about graphs. In 2019, that jumped to 20%. From 2020 until today, 50% of inquiries are about graphs. AI and machine learning are in extremely high demand, and graph is among the hottest topics in this domain. But the concept dates back to the 18th century, when Leonhard Euler laid the foundation of graph theory. Euler was a Swiss scientist and engineer whose solution to the Seven Bridges of Königsberg problem essentially invented graph theory. What Euler did was to model the bridges and the paths connecting them as nodes and edges in a graph. That formed the basis for many graph algorithms that can tackle real-world problems. Google’s PageRank is probably the best-known graph algorithm, helping score web page authority.
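
Since PageRank comes up as the canonical example of a graph algorithm, here is a minimal power-iteration implementation over an adjacency list. The damping factor of 0.85 is the conventional choice, and the four-page "web" is invented.

```python
# Minimal power-iteration PageRank over an adjacency list -- the classic
# example of scoring node authority in a graph. Damping 0.85 is the
# conventional choice; the toy graph is invented.
def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for node, neighbours in graph.items():
            if not neighbours:                 # dangling node: spread evenly
                for n in nodes:
                    new_rank[n] += damping * rank[node] / len(nodes)
            else:
                for nb in neighbours:
                    new_rank[nb] += damping * rank[node] / len(neighbours)
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```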


Adam Grant on leadership, emotional intelligence, and the value of thinking like a scientist

One of the things that scares me in a lot of organizations is how attached people become to best practices. They might’ve been the best at the time that you created them. But as the world around you changes, as your culture evolves, what was best five years, 10 years ago may not be what’s most productive today. I think the language of best practices creates this illusion that there’s an end point, that we’ve already reached perfection. And so we don’t need to change anything. What I would love to see more organizations do instead is to strive for better practices, right? To say, “Okay, you know what? No matter how good a practice becomes it can always be improved. And we’re open to trying whatever ideas you have for trying to evolve the way that we do things around here.” ... When you see what other people are feeling, that’s information about what their motivations are, what’s occupying a lot of their energy and attention. Without that information, you’re actually handicapped as a leader.


Not as complex as we thought: Cyberattacks on operational technology are on the rise

The "low-hanging fruit" many attackers are going for are graphical user interfaces (GUI) -- including human machine interfaces (HMI) -- which are, by design, intended to be simple user interfaces for controlling complex industrial processes. As a result, threat actors are able to "modify control variables without prior knowledge of a process," Mandiant says. Another trend of note is hacktivism, propelled by widely available and free tutorials online. Recently, the researchers have seen hacktivist groups bragging in anti-Israel/pro-Palestine social media posts that they have compromised Israeli OT assets in the renewable and mining sectors. Other low-skilled threat actors appear to be focused on notoriety, however, with little knowledge of what they are targeting. In two separate cases, threat actors bragged about hijacking a German rail control system -- only for it to be a command station for model train sets -- and in another, a group claimed they had broken into an Israeli "gas" system, but it was nothing more than a kitchen ventilation system in a restaurant. 


Evolutionary Architecture from an Organizational Perspective

Business and IT must work together to understand the business environment and adapt the architecture accordingly. Only then is the feedback loop between the new customer’s needs and a created solution short enough to evolve architecture in the right direction. The delivery team directly listens to the client’s needs and proposes a solution. Therefore, our architecture evolves naturally with the overall business. There isn’t an additional layer of communication that slows down accommodating the change. When the architecture doesn’t correspond to the business environment, we can remodel architecture much more quickly. Additionally, the delivery team works more closely with the clients. They understand their needs. Based on that, the evolution of the system becomes more business-oriented. We don’t create architecture for the sake of the architecture -- we create a spine for the overall business goal. This idea of empowered teams is shown in detail in the book Empowered by Marty Cagan and Chris Jones. A team is responsible for gathering clients’ needs, discovering the right solution, implementing it, and gathering feedback.



Quote for the day:

"Leaders must know where they are going if they expect others to willingly join them on the journey." -- Kouzes & Posner

Daily Tech Digest - May 29, 2021

TSA’s pipeline cybersecurity directive is just a first step experts say

This new regulation requires that designated pipeline security companies report cybersecurity incidents to the DHS's Cybersecurity and Infrastructure Security Agency (CISA) no later than 12 hours after a cybersecurity incident is identified. The TSA estimates that about 100 companies in the US would fall under the directive's mandates. Pipeline owners and operators must also designate a cybersecurity coordinator who is required to be available to TSA and CISA 24/7 to coordinate cybersecurity practices and address any incidents that arise. Finally, pipeline owners and operators must "review their current activities against TSA's recommendations for pipeline cybersecurity to assess cyber risks, identify any gaps, develop remediation measures, and report the results to TSA and CISA." Although not appearing anywhere in the directive, pipeline companies that fail to meet the security requirements would be subject to financial fines, starting at $7,000 per day, government officials say. ... In its press release announcing the directive, the TSA said "it is also considering follow-on mandatory measures that will further support the pipeline industry in enhancing its cybersecurity and that strengthen the public-private partnership so critical to the cybersecurity of our homeland."


The Limits to Blockchain Scalability

There are two ways to try to scale a blockchain: fundamental technical improvements, and simply increasing the parameters. ... Unfortunately, there are many subtle reasons why this approach is fundamentally flawed. Computers running blockchain nodes cannot spend 100% of CPU power validating the chain; they need a large safety margin to resist unexpected DoS attacks, they need spare capacity for tasks like processing transactions in the mempool, and you don't want running a node on a computer to make that computer unusable for any other applications at the same time. Bandwidth similarly has overhead: a 10 MB/s connection does NOT mean you can have a 10 megabyte block every second! A 1-5 megabyte block every 12 seconds, maybe. And it is the same with storage. Increasing hardware requirements for running a node and limiting node-running to specialized actors is not a solution. For a blockchain to be decentralized, it's crucially important for regular users to be able to run a node, and to have a culture where running nodes is a common activity.
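
The bandwidth arithmetic is worth making explicit. The back-of-envelope below takes the text's numbers at face value; the point is how small a fraction of raw link speed a safe block size actually consumes.

```python
# Back-of-envelope version of the bandwidth point above. The text's
# "1-5 MB every 12 s on a 10 MB/s link" implies only a few percent of raw
# bandwidth is safely usable once DoS margins, mempool relay, and other
# traffic are accounted for.
LINK_MB_PER_S = 10.0
BLOCK_INTERVAL_S = 12.0

for block_mb in (1, 5, 120):        # 120 MB = naive "full link" block
    needed = block_mb / BLOCK_INTERVAL_S
    share = needed / LINK_MB_PER_S
    print(f"{block_mb:>3} MB block: {needed:4.1f} MB/s sustained "
          f"= {share:5.1%} of the link")
# 1 MB -> 0.8% of the link; 5 MB -> 4.2%; a "full link" block would leave
# zero headroom for DoS resistance or mempool traffic.
```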


Telcos back Artificial Intelligence, Internet of Things for 5G in India

The drivers that may trigger IoT applications, according to him, include low cost of storage and computing data on the cloud platform, emerging edge computing trends, falling costs of data, sensors, devices, and availability of mobile app development platforms. Following the Covid-19 pandemic, IoT is expected to drive significant transformation in the healthcare sector. "Hospital drug and waste management, robotic surgery, real-time health monitoring and diagnostics via IoT will stand to witness increased adoption." Bharti Airtel is working with the Swedish gear maker Ericsson on aerial drones for security and surveillance purposes, and dropping of relief material in emergency situations. Billionaire Mukesh Ambani-owned Reliance Jio together with Korean Samsung Networks has been working on virtual classrooms, and previously demonstrated high-definition content streaming. Kochhar feels that bringing futuristic technologies such as AR and VR to classrooms may redefine education and skilling of students. "AR and VR require higher bandwidth, lower latency and network resiliency. ..." 


Implementing a digital transformation at industrial companies

Before pursuing digital opportunities, leaders must first develop and align on a digital vision for their organization, looking at both the overall digital strategy and value proposition for their companies. They should begin by assessing their capabilities, estimating the resources required, and contemplating potential partnerships that could help them achieve their goals. Other practical issues include the feasibility of the proposed initiatives and their potential value. The basic question underlying all strategic plans is this: How can digital help us transform core business processes or generate new opportunities? When developing the road map, industrial companies should consider the strategic implications for the incumbent business, including disruptions to any offline distribution channels as digital sales grow. Companies should also address the inevitable channel conflicts in the strategic road map by acknowledging the risks, evaluating the potential impact, and creating a path forward to mitigate any issues. For instance, companies should determine what roles they expect the distributors to play with the new digital channels. Some may decide to eliminate distributors and conduct all business through e-commerce while others may keep offline and online channels. 


Can You Build a Machine Learning Model to Monitor Another Model?

Can you train a machine learning model to predict your model’s mistakes? Nothing stops you from trying. But chances are, you are better off without it. We’ve seen this idea suggested more than once. It sounds reasonable on the surface. Machine learning models make mistakes. Let us take these mistakes and train another model to predict the missteps of the first one! Sort of a “trust detector,” based on learnings from how our model did in the past. ... In regression problems, sometimes you can build a “watchdog” model. This happens when your original model optimizes the prediction error, taking into account its sign. If the second “watchdog” model predicts the absolute error instead, it might get something more out of the dataset. But here is the thing: if it works, this does not tell you that the model is “wrong” or how to correct it. Instead, it is an indirect way to evaluate the uncertainty of the data inputs. (Here is a whole paper that explores this in detail). In practice, this returns us to the same alternative solution. Instead of training a second model, let’s check whether the input data belongs to the same distributions!
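
The watchdog setup the text describes takes only a few lines with scikit-learn. The sketch below uses synthetic data with input-dependent noise; note that the watchdog trains on errors measured on data the primary model never saw, and that, if it works, it is estimating input uncertainty rather than diagnosing the model.

```python
# Sketch of the "watchdog" setup: a second regressor trained on the
# absolute errors of the first. Synthetic data; models and noise shape
# are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(4000, 2))
# Noise grows with |x0|: some regions of input space are inherently noisier.
y = np.sin(X[:, 0]) + X[:, 1] + rng.normal(0, 0.1 + 0.3 * np.abs(X[:, 0]))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Split again so the watchdog sees errors on data the primary never saw;
# otherwise it would just learn the primary's (small) training residuals.
X_fit, X_wd, y_fit, y_wd = train_test_split(X_tr, y_tr, random_state=0)

primary = GradientBoostingRegressor().fit(X_fit, y_fit)
watchdog = GradientBoostingRegressor().fit(
    X_wd, np.abs(y_wd - primary.predict(X_wd)))

# If this correlates with the true test error, the watchdog is picking up
# input-dependent uncertainty -- not telling you how to fix the model.
abs_err_te = np.abs(y_te - primary.predict(X_te))
corr = np.corrcoef(watchdog.predict(X_te), abs_err_te)[0, 1]
print(f"correlation with true |error|: {corr:.2f}")
```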


4 robotic process automation project fails to avoid

Many organizations select "low-hanging-fruit" RPA initiatives without a true analysis of their workflows and how those affect other processes. Most businesses are stumped by a deceptively simple question: Which are the right processes for automation? Determining where to start with your RPA program is critical to success. Using advanced process mining and discovery tools to do a thorough analysis of your business processes will give you a "digital twin" of how they currently work and let you know which are best suited for digital transformation. ... RPA on its own cannot understand unstructured documents, so you need AI-enabled bots with content intelligence. In this way, bots can carry out tasks such as reading a document; categorizing, routing, extracting, and validating data from it; and doing other tasks related to understanding and processing unstructured content. Using content intelligence with RPA can speed your processes and ready your organization to add more experiential opportunities to engage with customers via interactive mobile apps, cognitive virtual assistants that combine voice and conversational AI, and chatbots.


Blue Prism 7 shifts focus from RPA to programmable digital workers

“Scaling intelligent automation within the cloud and enabling increased demand will be the ultimate differentiator in a year of significant growth for the market,” Blue Prism CEO and chair Jason Kingdon told VentureBeat. While other RPA vendors have aimed to improve the technical characteristics of RPA infrastructure, Blue Prism has focused on improving the programmability, manageability, and integration of RPA infrastructure. Technical infrastructure efforts are important, as RPA’s original focus on making it easier to simulate user interaction with applications often incurred infrastructure scaling liabilities. Focus is shifting, however, as the major RPA vendors explore different approaches to scaling people’s ability to quickly create new automations with appropriate guardrails. That is key to Blue Prism’s recent efforts. A pointed criticism of traditional approaches to RPA — despite what the name implies — has centered around their focus on automating tasks rather than processes. “We looked at how to automate the process of programming not just tasks, but an entire digital workforce end-to-end, and that guided our redesign of Blue Prism’s platform for V7,” Kingdon said.


SolarWinds hackers resurface to attack government agencies and think tanks

The group behind the infamous SolarWinds hacks is on another cyberattack spree, this time targeting not just government agencies but others as well. In a report published Thursday, Microsoft revealed that the threat actor Nobelium launched a series of attacks this past week against government agencies, think tanks, consultants, and non-governmental organizations. More than 25% of the victims were involved in international development, humanitarian and human rights work, according to Microsoft. Affecting more than 150 different organizations, the attacks targeted 3,000 separate email accounts. Many of the attacks were blocked automatically by security software, with Microsoft's Windows Defender catching the malware used to try to compromise the organizations. Identifying the culprit as Nobelium, Microsoft pointed out that this is the same group behind the SolarWinds hack in 2020. Those attacks, which compromised a SolarWinds monitoring tool through its software supply chain, hit various government agencies and were attributed to Russia. Microsoft called the latest incident a continuation of Nobelium's information-gathering efforts targeting government agencies involved in foreign policy.


Surviving Automation: It's Now Coming for White-Collar Workers

Adapting and expanding one’s skillset is one tactic for avoiding redundancy via automation. “Seek out any training available, either internally — many firms provide internal training — or via well-regarded sites such as Coursera, Data Camp, etc.,” Coker of the University of Westminster suggested. Pay attention to news and developments in your field, he said, and keep your own skills up to date accordingly. Also, the tools that allow automation to happen must be created, Edge pointed out. That involves software developers, coders, UI/UX professionals, yes — but it also requires expertise from those with deep experience in a given field. One of the best ways of surviving automation in your field is to find a way to get in front of the people designing automation software in order to help them do their jobs better, Edge said. “That requires a little understanding of how software works — but more importantly, to move into those product design roles, we need to think more deeply about what we do and why.” Additionally, as companies invest more in digital transformation, there will be increased demand for professionals with experience in what that looks like in their particular industries.


Building a better you

A healthy dose of common sense — and humanity — helps when making changes. Grit and persistence only go so far. Milkman advises that “when you keep hitting a wall on a particular goal, it’s time to step back, reassess, and think about the bigger picture instead of making yourself miserable.” Don’t overengineer the solutions, either. For example, although we know that forming stable routines is key to habit formation, you must build in sufficient buffers for life events or hiccups that may make it difficult to follow your plan. Otherwise, you’ll end up disappointed and less likely to sustain your new behaviors. In one experiment, those who were rewarded for exercising on a more flexible schedule kept working out a lot more at other times, too — not only at the time they’d said was most convenient. In this instance, a more flexible approach wound up embedding a new behavior. The others, who had agreed to exercise at a fixed time and day, Milkman writes, transformed from “Routine Rachels” into “Rigid Rachels.” That is, when events made it impossible to exercise at the regular time, they didn’t compensate by exercising at other times.



Quote for the day:

"People buy into the leader before they buy into the vision." -- John C. Maxwell

Daily Tech Digest - May 28, 2021

What is a Data Lake? It is not a Data Swamp

A data lake is a place for storing large amounts of data that originate from various sources and are stored in their raw form. Importantly, the heterogeneous data is neither cleaned nor transformed before the loading process. Once loading is complete, the data is available in a single system. In addition to structured data, a data lake also stores and manages semi-structured (CSV, logs, JSON), unstructured (e-mails, documents, reports), and binary data (video, audio, images). The list of all possible formats is of course incomplete, but I think you know what I mean. The goal is to gather all company data in one place in order to be able to quickly access the entire data stock. Users should be able to immediately create visualizations, reports, and analyses from the data. ... For the data lake to work efficiently for you rather than degenerate into a data swamp in which no data can be found anymore, the collected data must offer business value for the future. Otherwise it is very difficult for analysts to extract information from such a volume of data, especially when no metadata or tags are used; without them, analysts can hardly locate or interpret the data.
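As a hedged illustration of that last point, the sketch below lands a raw JSON record in an S3-based lake while attaching metadata at write time, so the data stays findable. The bucket name, key layout, and tag values are hypothetical, not from the article.

    # Hypothetical sketch: write raw, untransformed data to a lake's raw zone
    # together with metadata, so the lake does not become a swamp.
    import json
    import boto3

    s3 = boto3.client("s3")

    record = {"order_id": 42, "status": "shipped"}  # raw, untransformed payload

    s3.put_object(
        Bucket="example-data-lake",                  # assumed bucket
        Key="raw/orders/2021/05/28/orders.json",     # raw zone, partitioned by date
        Body=json.dumps(record),
        Metadata={                                   # tags that keep the lake navigable
            "source": "order-service",
            "owner": "sales-analytics",
            "schema-version": "1",
        },
    )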


How to Get Developer and Security Teams Aligned

The concept of policies replacing security standards builds on the idea of culture shifts. Security standards are typically just a piece of documentation saved on Confluence or GSuite somewhere. They may get examined by a developer during a mandatory annual training session, or occasionally for reference, but they aren’t dynamic and are rarely top of mind. Those responsible for enforcing such standards are normally compliance or security operations specialists, who are logically distanced from developers. Aside from low adoption rates and disruptions to Agile workflows, security standards often lead to the ‘enforcer’ becoming the bad guy. This drives even more of a wedge between dev and security, making security feel a bit like doing your taxes (and no one wants that). If the expertise of the traditional ‘enforcer’ is shared with developers and dynamic, adaptable policies are adopted in place of rigid standards, then security simply becomes part of the workflow. Zero-trust networking is a great example of this. Zero-trust networking is probably the best way to secure your infrastructure, and it relies on expertly defined and managed policies underpinning each of its 10 principles.
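A minimal policy-as-code sketch, in plain Python rather than any particular policy engine: rules are data that security specialists can update, every request is evaluated against them at run time, and access is denied by default in zero-trust fashion. The roles, services, and rules are illustrative assumptions.

    # Dynamic, adaptable policies as code: deny by default, allow only what
    # a rule explicitly permits. All names and rules here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Request:
        user_role: str
        service: str
        mfa_passed: bool

    # Rules are plain data; security can change them without touching app code.
    POLICIES = [
        lambda r: r.service == "billing-api" and r.user_role == "finance" and r.mfa_passed,
        lambda r: r.service == "docs" and r.mfa_passed,
    ]

    def is_allowed(request: Request) -> bool:
        # Zero-trust style: nothing is allowed unless some rule says so.
        return any(rule(request) for rule in POLICIES)

    assert is_allowed(Request("finance", "billing-api", mfa_passed=True))
    assert not is_allowed(Request("intern", "billing-api", mfa_passed=True))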

The evolution of the chief information officer (CIO)

Future success relies on leaders’ digital ability as much as their aptitude for uniting teams and encouraging people to embrace new technology and new ways of working at every level of the organisation. Without the leadership to make new systems and processes work and deliver against business objectives, outlay in innovation quickly becomes a source of future technical debt. Developing leadership behaviours, including emotional intelligence (EQ), will help CIOs to build empathy; understanding the human impact of transition ensures teams feel heard and valued, reduces resistance to change and enables CIOs to build trust. Equally, strong communication skills will enable CIOs to speak in both the languages of data and business and use storytelling to share their vision and secure buy-in from teams and shareholders. CIOs are also protectors. With greater concern over business threats, CIOs safeguard their organisations’ assets and future. As well as managing data governance and cyber security, they can add business value by anticipating the opportunities and risks presented by disruption.

It’s time to shift from verifying data to authenticating identity

One of the more complex vishing schemes is the man-in-the-middle attack, in which a fraudster sets up two parallel conversations between a business and its customer. The business believes it is connecting with the customer, and the customer thinks they are talking to the business — but in reality, it is the fraudster interacting with both. The fraudster might initiate the scheme by requesting the issuance of a one-time passcode via a session on the business’s website. In parallel, posing as the business, the fraudster calls the unwitting customer and, using social engineering, convinces the individual to read off the one-time passcode sent by the business. The fraudster then uses this information to log in to the customer’s account and perform unauthorized transactions. Since the fraudster was able to provide all requested data to pass each point in the verification process, access is granted. With synthetic identity fraud, criminals combine real and fake information to create a fictitious identity, which they use to open up financial accounts and make fraudulent purchases. While a false identity might seem easy to spot, the reality is much more challenging.
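The toy model below shows why the relay works: the passcode is verified only by its value, so nothing binds it to the channel it was issued on. The names and the flow are hypothetical simplifications, not the actual systems described above.

    # Toy illustration of the one-time-passcode relay. Verification checks
    # only the code's value, so a relayed code is indistinguishable from one
    # entered by the real customer. All names here are hypothetical.
    import secrets

    issued_otps = {}

    def issue_otp(account):
        # The business sends a one-time passcode to the real customer.
        code = f"{secrets.randbelow(10**6):06d}"
        issued_otps[account] = code
        return code  # delivered to the customer's phone

    def verify_otp(account, code):
        # Checks only the value -- nothing ties the code to the caller.
        return issued_otps.get(account) == code

    code_sent_to_customer = issue_otp("alice")
    # The fraudster social-engineers the customer into reading the code aloud...
    code_relayed_by_fraudster = code_sent_to_customer
    # ...and the verification step cannot tell the two channels apart.
    assert verify_otp("alice", code_relayed_by_fraudster)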

Serverless Computing Brings New Security Risks

Given the distributed nature of serverless functions – essentially, the reason for their flexibility and scalability – many existing security tools will provide little to no visibility into, or control over, these computing environments. Many of the security attacks that will occur in serverless functions will be the result of misconfigurations and mistakes that happen outside the purview of the security team, and of legacy solutions that don’t translate to serverless architectures. Further, because abstracted workloads create blind spots, attackers will have more room to maneuver undetected. Serverless functions will even render some traditional DevSecOps tools less useful. Scanning tools must monitor hundreds of individual repositories instead of a single monolithic repository, while application performance monitoring (APM) tools lack security proficiency and cannot protect against the OWASP Serverless Top 10 risks. ... For many organizations, serverless architecture is a very different and unique computing environment – unlike anything they’ve experienced or had to protect before now. That reality means that organizations need a fresh approach to securing these environments and will need to look beyond the traditional tools in their tech stack today.


Center for Internet Security: 18 security controls you need

The Center for Internet Security has updated its set of safeguards for warding off the five most common types of attacks facing enterprise networks—web-application hacking, insider and privilege misuse, malware, ransomware, and targeted intrusions. In issuing its CIS Controls V8 this month, the organization sought to present practical and specific actions businesses can take to protect their networks and data. These range from making an inventory of enterprise assets to account management to auditing logs. In part the new version was needed to address changes in how businesses operate since V7 was issued three years ago, and those changes guided the work. “Movement to cloud-based computing, virtualization, mobility, outsourcing, work-from-home, and changing attacker tactics have been central in every discussion,” the new controls document says. CIS changed the format of the controls a bit, describing actions that should be taken to address threats and weaknesses without saying who should perform those tasks. That put the focus on the tasks without tying them to specific teams within the enterprise. The controls each come with detailed procedures for implementing them, along with links to related resources.


Advantages of Cloud Computing In Banking Can’t Be Ignored

The key to successful digital banking transformation includes embracing the cloud. While there have been reservations in the past around cloud security and regulation, cloud computing solutions are becoming prevalent in the marketplace for both traditional and non-traditional financial institutions. The use of data and the deployment of advanced analytics, machine learning, and artificial intelligence require more processing power than all but the largest financial institutions possess. The good news is that there are several cloud-based solution providers, like IBM, that have created industry-specific solutions for the banking industry. According to IBM, “Organizations have an enormous opportunity to leverage cloud computing to drive innovation and improve their competitive position. Cloud computing – whether private, hybrid or public – enables organizations to be far more agile while reducing IT costs and operational expenses. In addition, cloud models enable organizations to embrace the digital transformation necessary to remain competitive in the future.”


The cybersecurity industry is guarding against all the wrong threats

Smart technology itself, increasingly being deployed across government and private sector systems, may soon create new webs of vulnerability, according to a number of leading cybersecurity researchers. The problem is twofold: hackers will ultimately begin using artificial intelligence against systems, and there is concern that an inability to quickly spot flaws in machine learning models could create even more vulnerabilities. It would be naïve to believe that criminal hackers, who have already built help desk support operations and a vast marketplace for “plug and play” intrusion tools, would not find a way to use AI for attacks. “My guess is this isn’t very far off, and we had better start thinking about its implications,” said security technologist Bruce Schneier. “As AI systems get more capable, society will cede more and more important decisions to them, which means that hacks of those systems will become more damaging.” There is also concern within the cybersecurity community that growing use of machine learning could be opening new avenues of exploit for threat actors. Adi Shamir, professor at the Weizmann Institute in Rehovot, Israel, and a co-founder of RSA, has been analyzing the fragile state of neural networks and recently published a paper on his findings.


AIOps Has a Data(Ops) Problem

There are two issues with data collection. The first is proper instrumentation. It sounds easier than it is. The entire observability, monitoring, and AIOps ecosystem depends on properly instrumenting your observable sources. If your systems, devices, services, and infrastructure are not properly instrumented, you will have data blind spots. No matter how much data you collect from certain areas, if you do not have a holistic view of all the telemetry components, you will get only a partial view of any system. Obviously, the instrumentation depends mostly on developers. The second issue is integration. As any AIOps vendor will tell you, this is probably the most difficult part of getting your AIOps solution going. The more input from varying telemetry sources, the better the insights will be. Any good AIOps solution will be able to integrate easily with the basic golden telemetry – logs, metrics, and traces. In addition, integrating with notification systems (such as OpsGenie, PagerDuty, etc.) and perhaps event streams (such as Kafka) is useful as well. However, I quite often see major enterprises struggling a lot to integrate AIOps solutions with their existing enterprise systems.
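As a small, hedged illustration of the instrumentation side, the sketch below emits all three pieces of the golden telemetry – a log line, a trace span, and a metric – using the standard library logger and the OpenTelemetry API. Exporter configuration is omitted (without it the OpenTelemetry calls are no-ops), and the service and attribute names are assumptions.

    # Emitting logs, metrics, and traces from a single code path.
    import logging

    from opentelemetry import metrics, trace

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("checkout-service")       # logs

    tracer = trace.get_tracer("checkout-service")     # traces
    meter = metrics.get_meter("checkout-service")     # metrics
    orders = meter.create_counter("orders_processed")

    def process_order(order_id):
        # One span, one log line, and one counter increment per order.
        with tracer.start_as_current_span("process_order"):
            log.info("processing order %s", order_id)
            orders.add(1, {"region": "eu-west-1"})    # assumed attribute

    process_order("A-1001")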


Agile Transformation: an Integral Approach

"An integral approach incorporates all of the essential perspectives, schools of thought, and methods into a unified, comprehensive and accurate framework" is a simple definition from the book. The main leverage of Integral Theory is that it provides a meta-framework onto which other techniques, approaches, and frameworks can be mapped. The fundamental premise of integral thinking is that any school of thought or method that has been around for any length of time must have some truth to it -- "all perspectives are true, but partial" is a Wilber quote. Integral helps us take multiple perspectives on situations, which is key for change and adaptability in a complex world, instead of getting stuck in our own, limited perspective. As Ken Wilber said to us when we interviewed him for the book -- there are two things that, above all else, make real transformation possible: the ability to take the perspective of others, and the ability to see one’s own "seer". Both of these are fostered by using integral thinking. Doing this cuts through our confusion when we run into the challenges of existing culture and leadership mindsets while implementing agile.



Quote for the day:

"Open Leadership: the act of engaging others to influence and execute a coordinated and harmonious conclusion." -- Dan Pontefract