Daily Tech Digest - August 20, 2020

11 penetration testing tools the pros use

Formerly known as BackTrack Linux and maintained by the good folks at Offensive Security (OffSec, the same folks who run the OSCP certification), Kali is optimized in every way for offensive use by penetration testers. While you can run Kali on its own hardware, it's far more common to see pentesters using Kali virtual machines on OS X or Windows. Kali ships with most of the tools mentioned here and is the default pentesting operating system for most use cases. Be warned, though--Kali is optimized for offense, not defense, and is easily exploited in turn. Don't keep your super-duper extra secret files in your Kali VM. ... Why exploit when you can meta-sploit? This appropriately named meta-software is like a crossbow: Aim at your target, pick your exploit, select a payload, and fire. Indispensable for most pentesters, Metasploit automates vast amounts of previously tedious effort and is truly "the world's most used penetration testing framework," as its website trumpets. An open-source project with commercial support from Rapid7, Metasploit is a must-have for defenders to secure their systems from attackers.


The Role of Business Analysts in Agile

A few things that we as BA Managers need to be aware of include: Understanding of the role - because of a BA’s ability to be a flexible, helpful, all-round "fill-in-the-gaps" person, the role of the BA gets blurrier and blurrier. This is what makes it interesting and also so great when it comes to working within an agile team. Ultimately it also makes it complicated to explain to others, especially those unfamiliar with the role. If it is complicated to explain, it is easy for people to underestimate the value it brings, so make sure you are clear in your "pitch" of what your BAs do! Being pigeonholed into the role - if you are a great BA, nobody wants to lose you, so they will continue giving you BA work even if you want to move into something else like project management. It is key for those managing BAs to actively support their career aspirations even if they are outside of the discipline, and to lobby on their behalf. Hitting an analysis complexity "ceiling" - if you are constantly with your team helping them solve delivery problems, it is very hard to dedicate focused analysis time to upcoming large initiatives.


Cisco bug warning: Critical static password flaw in network appliances needs patching

The flaws reside in the Cisco Discovery Protocol, a Layer 2 or data link layer protocol in the Open Systems Interconnection (OSI) networking model. "An attacker could exploit these vulnerabilities by sending a malicious Cisco Discovery Protocol packet to the targeted IP camera," explains Cisco in the advisory for the flaws CVE-2020-3506 and CVE-2020-3507. "A successful exploit could allow the attacker to execute code on the affected IP camera or cause it to reload unexpectedly, resulting in a denial-of-service (DoS) condition." The Cisco cameras are vulnerable if they are running a firmware version earlier than 1.0.9-4 and have the Cisco Discovery Protocol enabled. Again, customers need to apply Cisco's update to protect the model because there's no workaround. This bug was reported to Cisco by Qian Chen of Qihoo 360 Nirvan Team. However, Cisco notes it is not aware of any malicious activity using this vulnerability.  The second high-severity advisory concerns a privilege-escalation flaw affecting the Cisco Smart Software Manager On-Prem or SSM On-Prem. It's tracked as CVE-2020-3443 and has a severity score of 8.8 out of 10.


Fuzzing Services Help Push Technology into DevOps Pipeline

"Fuzzing by its very nature is this idea of automated continuous testing," he says. "There is not a lot of human input that is necessary to gain the benefits of fuzz testing in your environment. It's a good fit from the idea of automation and continuous testing, along with this idea of continuous development." Many companies are aiming to create agile software development processes, such as DevOps. Because this change often takes many iterative cycles, advanced testing methods are not usually given high priority. Fuzz testing, the automated process of submitting randomized or crafted inputs into the application, is one of these more complex techniques. Even within the pantheon of security technologies, fuzzing is often among the last adopted. Yet, 2020 may be the year that changes. Major providers and even frameworks have focused on making fuzzing easier, says David Haynes, a product security engineer at Cloudflare. "I think we are just getting started in terms of seeing fuzzing becoming a bit more mainstream, because the biggest factor hindering (its adoption) was available tooling," he says. "People accept that integration testing is needed, unit testing is needed, end-to-end testing is needed, and now, that fuzz testing is needed."


Why We Need Lens as a Kubernetes IDE

The current version of Lens vastly improves quality of life for developers and operators managing multiple clusters. It installs on Linux, Mac or Windows desktops, and lets you switch from cluster to cluster with a single click, providing metrics, organizing and exposing the state of everything running in the cluster, and letting you edit and apply changes quickly and with assurance. Lens can hide all the ephemeral complexity of setting up cluster access. It lets you add clusters manually by browsing to their kubeconfigs, and can automatically discover kubeconfig files on your local machine. You can manage local or remote clusters of virtually any flavor, on any infrastructure or cloud. You can also organize clusters into workgroups any way you like and interact with these subsets. This capability is great for DevOps and SREs managing dozens or hundreds of clusters or just helping to manage cluster sprawl. Lens installs whatever version of kubectl is required to manage each cluster, eliminating the need to manage multiple versions directly. It works entirely within the constraints each cluster’s role-based access control (RBAC) imposes on identity, so Lens users (and teams of users) can see and interact only with permitted resources.
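Lens is a desktop GUI, but the kubeconfig mechanics it automates can be sketched with the official Kubernetes Python client. This is a minimal illustration of discovering contexts and querying one cluster, not anything from Lens itself:

```python
# pip install kubernetes; assumes a standard ~/.kube/config with >= 1 context
from kubernetes import client, config

contexts, active = config.list_kube_config_contexts()
print("active context:", active["name"])
for ctx in contexts:
    print("available:", ctx["name"])

# Point an API client at one specific cluster; you only see what RBAC permits
api = client.CoreV1Api(
    api_client=config.new_client_from_config(context=contexts[0]["name"]))
for pod in api.list_namespaced_pod("default").items:
    print(pod.metadata.name)
```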


Computer scientists create benchmarks to advance quantum computer performance

The computer scientists created a family of benchmark quantum circuits with known optimal depths or sizes. In computer design, the smaller the circuit depth, the faster a computation can be completed. Smaller circuits also imply that more computation can be packed into the existing quantum computer. Quantum computer designers could use these benchmarks to improve design tools that could then find the best circuit design. “We believe in the ‘measure, then improve’ methodology,” said lead researcher Jason Cong, a Distinguished Chancellor’s Professor of Computer Science at UCLA Samueli School of Engineering. “Now that we have revealed the large optimality gap, we are on the way to develop better quantum compilation tools, and we hope the entire quantum research community will as well.” Cong and graduate student Daniel (Bochen) Tan tested their benchmarks on four of the most widely used quantum compilation tools. Tan and Cong have made the benchmarks, named QUEKO, open source and available on the software repository GitHub.
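For readers unfamiliar with the depth metric, a toy example in Qiskit (my choice for illustration; QUEKO itself ships its own circuit files on GitHub) shows how depth is counted:

```python
# pip install qiskit
from qiskit import QuantumCircuit

qc = QuantumCircuit(3)
qc.h(0)          # layer 1
qc.cx(0, 1)      # layer 2: depends on qubit 0
qc.cx(1, 2)      # layer 3: depends on qubit 1, so it cannot run in parallel
print(qc.depth())  # 3 -- fewer layers means a faster, less error-prone run
```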


Starting strong when building your microservices team

We’re used to hearing the slogan ‘Go big or go home’, but businesses would do well to think small when developing microservices. Here, developing manageable and reusable components will enable companies, partners and customers to use individual microservices across an entire landscape of applications and industries. In doing so, businesses aren’t restricting themselves to siloed applications. In addition, driving success with microservices involves considerable planning to ensure that nothing is left out. After all, microservices-based architecture consists of many moving parts and so developers should be mindful to guarantee service interactions are seamless from start to finish. The pandemic has shone a spotlight on the role of digital transformation in building up crisis resilience. Consequently, businesses are turning en masse to digital and the market is evolving apace. However, as operational and business models shift, companies must be mindful to avoid becoming locked-in to cloud vendor technologies and platforms in such a rapidly changing market. When working with a cloud partner, implementing their platform and other solutions shouldn’t be a given – while such tools will likely work fine in their own cloud environment, companies should be wary of how they will operate elsewhere.


From Legacy to Intelligent ERP: A Blueprint for Digital Transformation

Today’s ERP configuration is for running today’s business. Most run in the data center and capture, manage, and report on all core business transactions. Tomorrow’s intelligent ERP goes far beyond this charter. If you want to be part of the team transforming the business, then you should understand the vision of where the company is targeting growth over the next several years. What markets, products, and services are the priorities? What operations need to scale? What improvements in workflows can free up cash or make financial forecasting more reliable? How can you empower employees, teams, and departments to work efficiently, safely, and effectively as some people return to the office and others work remotely? Intelligent ERPs not only centralize operational workflows and data from sales, marketing, finance, and operations. These ERPs also extend data capture, workflow, and analytics around prospects and customers and their experiences interacting with the business. When fully implemented, they enable a full 360-degree view of the customer across all areas of the company that interface with them, from marketing to sales, through digital commerce, and across any customer support activities.


Researchers improve perception of robots with new hearing capabilities

Working out of the Robotics Institute at Carnegie Mellon University, Pinto, as well as fellow researchers Dhiraj Gandhi and Abhinav Gupta, presented their findings during the virtual Robotics: Science and Systems conference last month. The three started the project last June, according to a release from the university. "We present three key contributions in this paper: (a) we create the largest sound-action-vision robotics dataset; (b) we demonstrate that we can perform fine grained object recognition using only sound; and (c) we show that sound is indicative of action, both for post-interaction prediction, and pre-interaction forward modeling," they write in the study. "In some domains like forward model learning, we show that sound in fact provides more information than can be obtained from visual information alone." In the published study, the three researchers said sounds did help a robot differentiate between objects and predict the physical properties of new objects. They also found that hearing helped robots determine what type of action caused a particular sound. Robots using sound capabilities were able to successfully classify objects 76% of the time, according to Pinto and the study.


Running Axon Server in Docker and Kubernetes

“Breaking down the monolith” is the new motto, as the message finally hits home that gluttony is also a sin in application land. If we want to be able to change in step with our market, we need to increase our deployment speed, and just tacking on small incremental changes has proven to be a losing game. No, we need to reduce interdependencies, which ultimately also means we need to accept that too much intelligence in the interconnection layer worsens the problem rather than solving it, as it sprinkles business logic all over the architecture and keeps creating new dependencies. Martin Fowler phrased it as “Smart endpoints and dumb pipes”, and as we do this, we increase application components’ autonomy, and the individual pieces can finally start to shrink. Microservices architecture is a consequence of an increasing drive towards business agility, and woe to those who try to reverse that relationship. Imposing Netflix’s architecture on your organization to kick-start a drive for Agile development can easily destroy your business.



Quote for the day:

"Leadership is like beauty; it's hard to define, but you know it when you see it." -- Warren Bennis

Daily Tech Digest - August 19, 2020

Why Board Directors And CEOs Must Become AI Literate To Lead Forward

Unfortunately, many companies have been lured into AI programs with black box AI practices, meaning clear accountabilities are not evident or transparent, let alone audited, to manage risk. Board directors and CEOs know where their employees are located, whether they are working remotely or in an office, and who to contact for customer service or personal issues. Yet, I don’t know of one global company where a board director or a CEO can produce, in less than five minutes, a comprehensive list of all their AI algorithm and AI model assets across their enterprise operations, know the last model revision date, and have robust risk classification evidence verified by third-party auditors. With the democratization of data, which is the foundation of AI enablement, AI and machine learning (ML) KPIs must be elevated in importance, like our financial KPIs, to drive increased transparency, just as auditors have been disciplined with fiduciary accountability for profit and loss statements.... Few companies have mature AI centers of excellence where machine learning operations (MLOps) is a competency center, although many companies are now starting to invest in MLOps.


Look Upstream to Solve Your Team's Reliability Issues

Dan believes that one of the most important steps in upstream thinking isn’t system-related; it’s human. As people will be the ones solving these issues, we are the first piece of the puzzle, and the most crucial. There’s a way to do this well. Dan notes that you should try to “...surround the problem with the right people; give them early notice of that problem, and align their efforts toward preventing specific instances of that problem". For example, you might be bogged down with incidents and unable to tackle the action items stemming from incident retrospectives and operational reviews. These action items sit in the backlog and are not planned for any sprints. To change this, you’ll need to get buy-in from many stakeholders. You’ll need engineers, managers, product teams, and the VP of engineering on board. “Once you’ve surrounded the problem, then you need to organize all those people’s efforts. And you need an aim that’s compelling and important — a shared goal that keeps them contributing even in stressful situations,” Dan says. Once your team is ready to embark on this journey upstream, you’ll need to work on actually changing the system.



Speed up your home office: How to optimize your network for remote work and learning

Until recently, home internet providers have rarely spent much time discussing the upload bandwidth they allocate to each customer. ... For those working from home, getting that upload bandwidth has been problematic. The various broadband reps I've spoken to over the years have told me that very few people ever even ask about upload bandwidth, which is why ISPs have never offered much capacity. Of course, because of COVID-19, all that is changing rapidly. Before COVID, most users were surfing the web, watching YouTube or Netflix, or playing games. Little upload capacity was needed. Now everybody's on Zoom all the time. When you're on Zoom, you need broadband capacity to send video upstream just as much as you need broadband capacity to watch video. ... the really big issue you should be concerned about is upload capacity when it comes to online learning and work-based video conferencing. I know families of six where the two adults and four kids all used to go to either the office or school -- and who are now all at home, and who all need to be in Zoom conferences at the same time. It doesn't matter that your plan gives you 100Mbps down if all you have is 5Mbps up. With 5Mbps up, you can -- barely -- sustain one Zoom stream.
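A back-of-the-envelope check makes the gap concrete. The ~3 Mbps-per-stream figure below is an assumption roughly in line with published guidance for group HD video calls, not a number from the article:

```python
# Rough upstream budget for the six-person household described above
per_stream_up_mbps = 3   # assumed upstream need per HD group video call
plan_up_mbps = 5         # a typical "100 down / 5 up" cable plan
people_on_zoom = 6       # two adults + four kids, all on calls at once

needed = people_on_zoom * per_stream_up_mbps
print(f"need ~{needed} Mbps up, have {plan_up_mbps} Mbps up")  # ~18 vs 5
```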


Exclusive: 5 principles of creative disruption

Whatever line of innovation you’re in, there’s a tip that comes in handy, time and time again: sell the problem you solve, not the product. Think of your most-used mobile app (Google Maps, WhatsApp, Tinder perhaps) and, odds are, it transformed something that you found to be boring, awkward or time-consuming into a much better experience. For ActiveQuote, the idea was sparked by an irritating problem. There was nothing in health insurance that compared to what consumers were able to do with car insurance. “Deep frustration can become an entrepreneur’s inspiration,” says Theo. ... Ten years from now, no insurtech wants to be described as ‘very 2020s’. Keeping the customer journey fresh is just as important as maintaining the quality of the product. It’s something that ActiveQuote is keen to stay on top of. “A key challenge to address with any kind of online quote or application journey is customer drop-off,” says Jones. “Traditional, form-based application pages are fine for desktops. However, with the increase in mobile usage, a mobile-specific journey was imperative, especially with complex products such as health insurance.”


FinTech Leaders Should Embrace This Two-Pronged Strategy to Survive the Pandemic

More broadly, FinTech leaders can identify fertile ground by asking, “What processes can my company automate within the financial services industry that previously were managed by humans?” Other opportunities for innovation uncovered by the pandemic include technology solutions pertaining to safety, fraud, and remote banking and payments. The majority of the technological changes spurred by the pandemic will not be reversed, even after the virus is long gone. Therefore, FinTech leaders should think beyond solutions that are just stop-gaps. The second prong of your survival strategy should be adopting a business model that will generate predictable revenue over the long term. Rather than depending exclusively on one-time sales, delivery execution and post-support, financial technology providers should be positioning for the coveted, ever-present and predictable recurring revenue model. Subscription-based, service bureau, and software or platform-as-a-service offerings are clearly winning out over capital expenditures and premises-based investments. Rapidly proliferating, these types of solutions are driving the technological transition of a wide spectrum of enterprise-level B2B and B2C organizations.


Getting Started - AI Image Classification With TensorFlow.js

TensorFlow.js offers surprisingly good performance because it uses WebGL (a JavaScript graphics API) and thus is hardware-accelerated. To get even better performance, you can use tfjs-node (the Node.js version of TensorFlow). TensorFlow.js allows you to load pre-trained models right in your browser. If you have trained a TensorFlow or Keras model offline in Python, you can save it to a location available to the web and load it into the browser for inference. You can also use different libraries to include features such as image classification and pose detection without having to train your model from scratch. We will see an example of such a scenario later in the series. TensorFlow.js also allows you to use transfer learning to elevate an existing model using a small amount of data collected in the browser. This allows us to train an accurate model quickly and efficiently. We will also see an example of transfer learning in action later in the series. At the most basic level, you can use TensorFlow.js to define, train, and run models entirely in the browser. As mentioned earlier, using TensorFlow.js means that you can create and run AI models in a static HTML document with no installation required. At the time of writing, ...
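The offline-training path the article describes can be sketched in Python with the tensorflowjs converter package. The model architecture here is arbitrary, and "model_dir" is a placeholder for wherever your web server serves static files:

```python
# pip install tensorflow tensorflowjs
import tensorflow as tf
import tensorflowjs as tfjs

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(x_train, y_train) on your own data ...

# Writes model.json plus weight shards that the browser can later fetch
# and load with tf.loadLayersModel('https://your.site/model_dir/model.json')
tfjs.converters.save_keras_model(model, "model_dir")
```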


Digital Strategy In A Time Of Crisis

The priority is to protect employees and ensure business continuity. To achieve this, it is essential to continue adapting the IT infrastructure needed for massive remote working and to continue the deployment of collaborative digital systems. Beyond these new challenges, the increased risks related to cybersecurity and the maintenance of IT assets, particularly the application base, require vigilance. After responding to the emergency, the project portfolio and the technological agenda must be rethought. This may involve postponing or freezing projects that do not create short-term value in the new context. Conversely, it is necessary to strengthen transformation efforts capable of increasing agility and resilience, in terms of cybersecurity, advanced data analysis tools, planning, or even optimisation of the supply chain. The third major line of action in this crucial period of transition is to tighten human resources management, focusing on the large-scale deployment of agile methods and the development of sensitive expertise such as data science, artificial intelligence and cybersecurity. The war for talent will re-emerge in force when the recovery comes, and it is therefore important to strengthen the attractiveness of the company.


The Attack That Broke Twitter Is Hitting Dozens of Companies

A security staffer at one targeted organization who asked that WIRED not use his name or identify his employer described a more wholesale approach: At least three callers appeared to be working their way through the company directory, trying hundreds of employees over just a 24-hour period. The organization wasn't breached, the staffer said, thanks to a warning that the company had received from another target of the same hacking campaign and passed on to its staff prior to the hacking attempts. "They just keep trying. It's a numbers game," he says. "If we hadn’t had a day or two's notice, it could have been a different story." Phone-based phishing is hardly a new practice for hackers. But until recently, investigators like Allen and Nixon say, the attacks have focused on phone carriers, largely in service of so-called "SIM swap" attacks in which a hacker would convince a telecom employee to transfer a victim's phone service to a SIM card in their possession. They'd use that phone number to intercept two-factor authentication codes, or as a starting point to reset the passwords to cryptocurrency exchange accounts.


IoT governance: how to deal with the compliance and security challenges

“A good way to deal with IoT governance is to have a board as a governance structure. Proposals are presented to the board, which is normally made up of 6-12 individuals who discuss the merits of any new proposal or change. They may monitor ongoing risks like software vulnerabilities by receiving periodic vulnerability reports that include trends or metrics on vulnerabilities. Some boards have a lot of authority, while others may act in an advisory capacity to an executive or a decision maker,” Wagner advises. ... Instead of focusing on “beefing up” data security, organisations should prioritise data privacy in any governance program. She explains that at “the heart of IoT is the concept of the always-connected customer. Organisations are looking to capture, share and use the large volumes of customer data generated to drive a competitive edge. “The problem is that under GDPR the definition of data privacy is broad, which may find many in hot water as they come to adopt IoT. This is because the regulation places far-reaching responsibilities on organisations to impose a specific ‘privacy by design’ requirement. What this means is that organisations must have in place the appropriate technical and organisational measures to ensure that data privacy is not an afterthought....”


Identity Mismanagement: Why the #1 Cloud Security Problem Is about to Get Worse

There are two major reasons why identity and access management (IAM) is more difficult today than it has been before. One is the sheer scale of cloud deployments; the other is the increased frequency of identity-based cyberattacks. Let's take the problem of scale first. According to recent research, enterprises in 2017 expected to use an average of 17 cloud applications to support their IT, operations, and business strategies. So, it’s no surprise that 61 percent of respondents believe IAM is more difficult today than it was even those two short years ago. With so many different systems in play at any one time, IAM is no longer just about having a rigorous tracking and authentication system in place. In many organizations, the computing cost of authentication and encryption now forms the primary bottleneck on network performance. The second reason why contemporary IAM is more difficult is the dramatic rise in cyberattacks based on compromising identity systems. A decade ago, most cybersecurity analysts were primarily focused on securing data against direct intrusion and theft attempts.



Quote for the day:

"Let us never negotiate out of fear. But let us never fear to negotiate." -- John F. Kennedy

Daily Tech Digest - August 18, 2020

How eSIMs will aid mass market IoT development

This latest iteration of the ubiquitous SIM card, which has played a fundamental role in mobile telecommunications for over a quarter of a century, enables the SIM to be downloaded into a ‘Secure Element’ that can be permanently embedded inside any type of device, or thing. eSIMs can act as an authenticating party between the hardware device and service platform, to ensure end-to-end, chip-to-cloud security. Data can then be encrypted to protect against loss, theft, or tampering, with encryption available via zero-touch provisioning. ... This is the second wave of eSIM hype. The initial industry hype a couple of years ago around the embedded version of the SIM card did not live up to preliminary expectations, in large part because the supply was there, but the demand was not. However, we are now seeing a resurgence, due to the demand increasing as IoT technologies mature and more – different – industries enter the IoT, and security increases due to legislation. An increasing number of operators, too, are beginning to realise the cost benefits of these types of connections, opening up their networks to unlock the advantages of bundled, multi-device subscription plans, and new revenue opportunities, which is further driving demand.


Combining DataOps and DevOps: Scale at Speed

We need to step away from organizing our teams and technologies around the tools we use to manage data, such as application creation, information management, identity access management, and analytics and data science. Instead, we need to realize that data is a vital commodity, and to bring together all those who use or handle data to take a data-centric view of the enterprise. When development teams building applications or data-rich systems learn to look past the data delivery mechanics and instead concentrate on the policies and limitations that control data in their organization, they can align their infrastructure more closely to enable data flow across the organization to those who need it. To make the shift, DataOps needs teams to recognize the challenges of today's technology environment and to think creatively about specific approaches to data challenges in their organization. For example, you might have information about individual users and their functions, data attributes and what needs to be protected for individual audiences, as well as knowledge of the assets needed to deliver the data where it is required. Getting teams together that have different ideas helps the company evolve faster. Instead of waiting minutes, hours or even weeks for data, environments need to be created in minutes and at the pace required to allow the rapid creation and delivery of applications and solutions.


Deepening Our Understanding Of Good Agile: General Issues

Kuhn distanced himself from the idea that a new theory in science was about the discovery of objective truth. Instead, he viewed each new scientific revolution or synthesis as “less problematic” and “more fruitful” than the previous synthesis, with fewer anomalies and greater predictive power and maybe greater simplicity and clarity. For example, Copernicus’s heliocentric theory of the solar system had no greater predictive power than the previous earth-centric theory. But it won support because it was simpler and seemed more plausible. As it turned out, Copernicus’s theory involved the idea of rotating spheres, which was dead wrong, but the heliocentric part turned out to be right. The theory won broad support, despite its flaws. It is in this sense that we should not be expecting to discover a theory of management that explains the objective truth about management or that prescribes the perfect organizational structure. We should be content if we can find a synthesis that has fewer anomalies and greater predictive power than the previous synthesis. That is so a fortiori for management compared to physical science, because human society is constantly changing, unlike the physical universe. So there is even less likelihood of attaining even temporary truth about the human universe.


Firms Still Struggle to Prioritize Security Vulnerabilities

The underlying problem is that once vulnerabilities have been identified by automated systems, the prioritization and patching process is mostly manual, which slows an organization's response, says Charles Henderson, global managing partner and head of IBM's cybersecurity services team, X-Force Red. "You think of vulnerability management as 'find a flaw, fix a flaw,'" he says. "The problem is that we have gotten really good at finding flaws, and we haven't seen ... as an industry the same attention paid to actually fixing stuff that we find." Patching continues to be a significant problem for most companies. Only 21% of organizations patch vulnerabilities in a timely manner, the survey found. More than half of companies cannot easily track how efficiently vulnerabilities are being patched, do not have enough resources to patch the volume of issues, and lack a common way of viewing assets and applications across the company. In addition, most organizations do not have the ability to tolerate the necessary downtime. Overall, most companies face significant challenges in patching software vulnerabilities, according to the survey of 1,848 IT and IT security professionals by the Ponemon Institute for its State of Vulnerability Management in the Cloud and On-Premises report.


Cloudops tool integration is more important than the tools themselves

What’s missing is direct integration between the AIops tool and the security tool. Although they have different missions, they need each other. The security tool needs visibility into the behavior of all applications and infrastructure, considering that behaviors that are out of line with normal operations can often be tracked to security issues, such as DDoS attacks. At the same time, the cloudops tool could play some role in automatically defending the cloud-based systems, such as attempting a restart or taking other corrective action so the issue does not result in an outage. The recovery could be reported back to the security tool, which would take further action, such as blocking the IP address that is the source of the DDoS attack. This example describes security and ops tools working together, but there is much value in other tool integration as well. Configuration management, testing, special-purpose monitoring such as edge computing and IoT, data governance, etc., can all benefit from working together to create common automation between tools. The smarter cloud management and monitoring players, especially those selling AIops tools, have largely gotten the tool integration religion. 


How Active Cypher is Securing Enterprises from Malware Attacks

The cautious CIO should take the approach that their organization is already infected with ransomware. For the majority of ransomware attacks, user negligence is the problem. If a firm has employees, it's only a matter of time until they get ransomware. Yet IT departments should stop playing roulette, hoping that they are not the ones to fall this month, and should instead take a proactive approach: first, securing their data end-to-end through automated file-level encryption like what is offered through Active Cypher File Fortress. Secondly, they should utilize solutions like Ransom Data Guard that effectively shield clients from all permutations of ransomware attacks like WannaCry, RobbinHood, TeslaCrypt… by obfuscating data and actively countering malware when it attempts to attack. Employee cyber-training only gets you so far. ... The success of India’s economy and the rise of its companies have unfortunately led hackers to increasingly attack the country. Active Cypher’s Indian clients are addressed in a similar fashion as we currently handle global and non-North American clients – our product is not intensive in prep or installation and company IT teams can download and install very easily in half a day.


How robotics and automation could create new jobs in the new normal

“Contrary to some beliefs, I see robots as creating vast amounts of new jobs in the future,” he said. “Just like 50 years ago a website designer, vlogger, or database architect were not things, over the next 50 years we will see many new types of job emerge.” Nicholson cites robot pilots as an example. “Ubiquitous, truly autonomous robots are still a long way from reality, so with semi-autonomous capabilities with humans in the loop, we can achieve much better performance overall and generate a brand-new job sector,” he added. There’s a growing consensus that humans will work in conjunction with robots, performing complementary roles that play to their respective strengths. ... The robots generate a significant amount of performance data, which is automatically compiled into reports that need to be interpreted, assessed, and analyzed to improve operation and fleet performance. While much of this work could be incorporated into existing roles, such tasks may eventually require dedicated employees, leading to the creation of new jobs. “Managers can view the routes being cleaned, take a look at quantitative metrics such as run time and task frequency, and receive notifications around diagnostics and relevant software updates,” Spruijt said.


The Security Interviews: How Crest is remaking the future of consultancy

Now that the security marketplace has grown significantly and security services providers have gone from boutique outfits to big-name brands, this need is becoming greater than ever, says Glover. He adds that buyers are now realising that if they contract their security services to structured organisations that back up their technology claims with certified skills and best practice, they get better outcomes. He also reckons that security consultancy will soon begin to move from an advisory-based practice to an opinion-based practice. “We haven’t really done that as an industry yet, but I absolutely believe that is the direction of play,” he says. But what does that actually mean? Glover explains: “Right now, we provide advice and guidance. We look at your systems and we say ‘that’s not very good – you should correct it’. That’s advice. But what we’re now seeing under GDPR [General Data Protection Regulation] and other regulations is you are asked if you have taken appropriate steps to secure your data, otherwise the regulator is going to take regulatory action or fine you a lot of money. “So we are now moving into this area where security consultants have to be professional auditors and say, in our professional opinion, this organisation has or has not taken appropriate steps to secure its data. ...”


What working from home means for CISOs

It’s easy to understand why employees do what they do. CISOs have always had trouble convincing them that productivity and protection are not mutually exclusive — that users can do their jobs just as effectively by following policies, accepting security controls and using pre-approved apps and devices. Especially while working from home, the shift to productivity at all costs has threatened to disrupt this delicate balance. It comes as cyber criminals look to capitalise on distracted home workers, unprotected endpoints, overwhelmed VPNs, and distributed security teams who may be forced to focus on more pressing operational IT tasks. Google is blocking as many as 18 million Covid-themed malicious and phishing emails every day. It takes just one to get through and convince a remote worker to click, and the organisation may be confronted with the prospect of a debilitating ransomware outage, BEC-related financial loss, or damaging data breach. With many organisations struggling financially in the wake of government-mandated lockdowns, few will welcome the costs associated with a serious security incident.


Web of Things Over IoT and Its Applications

Internet connectivity is a minor concern for low-level sensors or hardware devices. Low-level sensors, such as temperature and motion sensors, usually transfer data using low-level protocols like Bluetooth Low Energy (BLE), Zigbee, 6LoWPAN, etc., which are not Internet compatible. Since IoT gateways understand those low-level protocols, they basically play the role of adapters between the internet and those sensors. Protocol transformation also takes place here. IoT gateways are installed inside smart homes, smart factories, etc., i.e., inside a Local Area Network where no unified communication standard is available; thus, those gateways can be used to communicate using a proprietary data format over the internet. Additionally, there are multiple cloud vendors providing IoT services in different shapes and textures. Once again there is a lack of standardization. AWS Alexa is tied with Philips Hue, so AWS and Hue can understand their data format but no one else can. This is gravitating towards the vendor lock-in black hole. To get rid of this problem, IoT needs vendor-neutral standards for the internet.
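A minimal sketch of that adapter role in Python: decode a proprietary BLE-style packet (the layout is invented here) and republish it as vendor-neutral JSON, roughly in the spirit of a W3C Web of Things property payload:

```python
import json
import struct

def decode_sensor_packet(packet: bytes) -> dict:
    # Hypothetical proprietary layout: 2-byte sensor id, 2-byte temp in 0.01 C
    sensor_id, centi_celsius = struct.unpack("<HH", packet)
    return {"sensor_id": sensor_id, "temperature_c": centi_celsius / 100}

def to_neutral_payload(reading: dict) -> str:
    # The internet-facing side no longer cares whether the device spoke
    # BLE, Zigbee or 6LoWPAN -- only about this shared JSON contract.
    return json.dumps({
        "id": f"urn:dev:sensor-{reading['sensor_id']}",
        "properties": {"temperature": reading["temperature_c"]},
    })

print(to_neutral_payload(decode_sensor_packet(b"\x01\x00\xd2\x08")))  # 22.58 C
```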



Quote for the day:

"Leadership is the art of influencing people to execute your strategic thinking." -- Nabil Khalil Basma

Daily Tech Digest - August 17, 2020

Remote DevOps is here to stay!

With a mass exodus of the workforce towards a home setting, especially in India, the demand for skilled professionals in DevOps has dramatically increased. A recent GitHub report on the implications of COVID-19 for the developer community suggests that developer activity has increased compared to last year. This also means that developers have shown resilience and continued to contribute, undeterred by the crisis. This is the shining moment for DevOps, which is built for remote operations. In a ‘choose your own adventure’ situation, DevOps helps organizations evaluate their own goals, skills, bottlenecks, and blockers to curate a modern application development and deployment process that works for them. As per an UpGuard report on DevOps Stats for Doubters, 63% of organizations that implemented DevOps experienced improvement in the quality of their software deployments. Delivering business value from data is contingent on the developers’ ability to innovate through methods like DevOps. It is about deploying the right foundation for modern application development across both public and private clouds. The current environment is uncharted territory for many enterprises.


Breaking Down Serverless Anti-Patterns

The goal of building with serverless is to dissect the business logic in a manner that results in independent and highly decoupled functions. This, however, is easier said than done, and developers often run into scenarios where libraries, business logic, or even just basic code has to be shared between functions, leading to a form of dependency and coupling that works against the serverless architecture. Functions depending on one another through a shared code base and logic leads to an array of problems. The most prominent is that it hampers scalability. As your systems scale and functions are constantly reliant on one another, there is an increased risk of errors, downtime, and latency. The entire premise of microservices was to avoid these issues. Additionally, one of the selling points of serverless is its scalability. By coupling functions together via shared logic and codebase, the system works against not only microservices principles but also the core serverless value of scalability. For example, a change in the data logic of function A will lead to necessary changes in how data is communicated and processed in function B, and even function C may be affected, depending on the exact use case; the sketch below illustrates the decoupled alternative.
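A minimal sketch of that decoupled alternative, with AWS-Lambda-shaped handler signatures and invented names: function A emits a self-describing event, and function B depends only on that event contract, never on A's code or data model:

```python
import json

def create_order_handler(event, context):
    """Function A: owns its own data logic and publishes an event."""
    order = {"order_id": event["order_id"], "total_cents": event["total_cents"]}
    # In a real system this would go to a queue or event bus, not be returned
    return {"type": "OrderCreated", "payload": json.dumps(order)}

def billing_handler(event, context):
    """Function B: consumes the event contract only -- no shared code with A."""
    order = json.loads(event["payload"])
    return f"charging {order['total_cents']} cents for order {order['order_id']}"

evt = create_order_handler({"order_id": "o-17", "total_cents": 1299}, None)
print(billing_handler(evt, None))
```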


Why Service Meshes Are Security Tools

Modern engineering organizations need to give individual developers the freedom to choose what components they use in applications as well as how to manage their own workflows. At the same time, enterprises need to ensure that there are consistent ways to manage how all of the parts of an application communicate inside the app as well as with external dependencies. A service mesh provides a uniform interface between services. Because it’s attached as a sidecar acting as a micro-dataplane for every component within the service mesh, it can add encryption and access controls to communication to and from services, even if neither are natively supported by that service. Just as importantly, the service mesh can be configured and controlled centrally. Individual developers don’t have to set up encryption or configure access controls; security teams can establish organization-wide security policies and enforce them automatically with the service mesh. Developers get to use whatever components they need and aren’t slowed down by security considerations. Security teams can make sure encryption and access controls are configured appropriately, without depending on developers at all. 


Review: AWS Bottlerocket vs. Google Container-Optimized OS

To isolate containers, Bottlerocket uses container control groups (cgroups) and kernel namespaces for isolation between containers running on the system. eBPF (extended Berkeley Packet Filter) is used to further isolate containers and to verify container code that requires low-level system access. The eBPF secure mode prohibits pointer arithmetic, traces I/O, and restricts the kernel functions the container has access to. The attack surface is reduced by running all services in containers. While a container might be compromised, it’s less likely the entire system will be breached, due to container isolation. Updates are automatically applied when running the Amazon-supplied edition of Bottlerocket via a Kubernetes operator that comes installed with the OS. An immutable root filesystem, which creates a hash of the root filesystem blocks and relies on a verified boot path using dm-verity, ensures that the system binaries haven’t been tampered with. The configuration is stateless and /etc/ is mounted on a RAM disk. When running on AWS, configuration is accomplished with the API and these settings are persisted across reboots, as they come from file templates within the AWS infrastructure.


Microsoft tells Windows 10 users they can never uninstall Edge. Wait, what?

Microsoft explained it was migrating all Windows users from the old Edge to the new one. The update added: "The new version of Microsoft Edge gives users full control over importing personal data from the legacy version of Microsoft Edge." Hurrah, I hear you cry. That's surely holier than Google. Microsoft really cares. Yet next were these words: "The new version of Microsoft Edge is included in a Windows system update, so the option to uninstall it or use the legacy version of Microsoft Edge will no longer be available." Those prone to annoyance would cry: "What does it take not only to force a product onto a customer but then make sure that they can never get rid of that product, even if they want to? Even cable companies ultimately discovered that customers find ways out." Yet, as my colleague Ed Bott helpfully pointed out, there's a reason you can't uninstall Edge. Well, initially. It's the only way you can download the browser you actually want to use. You can, therefore, hide Edge -- it's not difficult -- but not completely eliminate it from your life. Actually that's not strictly true either. The tech world houses many large and twisted brains. They don't only work at Microsoft. Some immediately suggested methods to get your legacy Edge back on Windows 10. Here's one way to do it.


Digital public services: How to achieve fast transformation at scale

For most public services, digital reimagination can significantly enhance the user experience. Forms, for example, can require less data and pull information directly from government databases. Texts or push notifications can use simpler language. Users can upload documents as scans. In addition, agencies can link touchpoints within a single user journey and offer digital status notifications. Implementing all of these changes is no trivial matter and requires numerous actors to collaborate. Several public authorities are usually involved, each of which owns different touchpoints on the user journey. The number of actors increases exponentially when local governments are responsible for service delivery. Often, legal frameworks must be amended to permit digitization, meaning that the relevant regulator needs to be involved. Yet when governments use established waterfall approaches to project management (in which each step depends on the results of the previous step), digitization can take a long time and the results often fall short. In many cases, long and expensive projects have delivered solutions that users have failed to adopt.


State-backed hacking, cyber deterrence, and the need for international norms

The issue of how cyber attack attribution should be handled and confirmed also deserves to be addressed. Dr. Yannakogeorgos says that, while attribution of cyber attacks is definitely not as clear-cut as seeing smoke coming out of a gun in the real world, with robust law enforcement, public-private partnerships, cyber threat intelligence firms, and information sharing via ISACs, the US has come a long way in terms of not only figuring out who conducted criminal activity in cyberspace, but arresting global networks of cyber criminals as well. Granted, things get trickier when these actors are working for or on behalf of a nation-state. “If these activities are part of a covert operation, then by definition the government will have done all it can for its actions to be ‘plausibly deniable.’ This is true for activities outside of cyberspace as well. Nations can point fingers at each other, and present evidence. The accused can deny and say the accusations are based on fabrications,” he explained. “However, at least within the United States, we’ve developed a very robust analytic framework for attribution that can eliminate reasonable doubt amongst friends and allies, and can send a clear signal to planners on the opposing side...."


Tackling Bias and Explainability in Automated Machine Learning

At a minimum, users need to understand the risk of bias in their data set because much of the bias in model building can be human bias. That doesn't mean just throwing out variables, which, if done incorrectly, can lead to additional issues. Research in bias and explainability has grown in importance recently, and tools are starting to reach the market to help. For instance, the AI Fairness 360 (AIF360) project, launched by IBM, provides open source bias mitigation algorithms developed by the research community. These include bias mitigation algorithms to help in the pre-processing, in-processing, and post-processing stages of machine learning. In other words, the algorithms operate over the data to identify and treat bias. Vendors, including SAS, DataRobot, and H2O.ai, are providing features in their tools that help explain model output. One example is a bar chart that ranks a feature's impact. That makes it easier to tell what features are important in the model. Vendors such as H2O.ai provide three kinds of output that help with explainability and bias. These include feature importance as well as Shapley partial dependence plots (e.g., how much a feature value contributed to the prediction) and disparate impact analysis. Disparate impact analysis quantitatively measures the adverse treatment of protected classes.
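As an illustration of what disparate impact analysis computes, here is a hand-rolled version on made-up data (AIF360 and the vendor tools above provide the production-grade equivalents). The metric is the ratio of favorable-outcome rates between groups, with values below roughly 0.8 commonly flagged under the "80% rule":

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: P(y=1|unprivileged) / P(y=1|privileged)."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

outcomes = [1, 0, 1, 1, 0, 1, 0, 0]                  # 1 = favorable prediction
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected attribute
print(disparate_impact(outcomes, groups, unprivileged="b", privileged="a"))
# 0.25 / 0.75 = 0.33 -- far below 0.8, so group "b" is adversely impacted
```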


Chief Data Analytics Officers – The Key to Data-Driven Success?

Core to the role is the experience and desire to use data to solve real business problems. Combining an overarching view of the data across the organisation, with a well-articulated data strategy, the CDAO is uniquely placed to balance specific needs for data against wider corporate goals. They should be laser-focused on extracting value from the bank’s data assets and ‘connecting-the-dots’ for others. By seeing and effectively communicating the links between different data and understanding how it can be combined to deliver business benefit, the CDAO does what no other role can do: bring the right data from across the business, plus the expertise of data scientists, to bear on every opportunity. Balance is critical. Leveraging their understanding of analytics and data quality, the CDAO can bring confidence to business leaders afraid to engage with data. They understand governance, and so can police which data can be used for innovation and which is business critical and ‘untouchable.’ They can deploy and manage data scientists to ensure they are focused on real business issues not pet analytics projects. Innovation-focused CDAOs will actively look for ways to generate returns on data assets, and to partner with commercial units to create new revenue from data insights.


How the network can support zero trust

One broad principle of zero trust is least privilege, which is granting individuals access to just enough resources to carry out their jobs and nothing more. One way to accomplish this is network segmentation, which breaks the network into unconnected sections based on authentication, trust, user role, and topology. If implemented effectively, it can isolate a host on a segment and minimize its lateral or east–west communications, thereby limiting the "blast radius" of collateral damage if a host is compromised. Because hosts and applications can reach only the limited resources they are authorized to access, segmentation prevents attackers from gaining a foothold into the rest of the network. Entities are granted access and authorized to access resources based on context: who an individual is, what device is being used to access the network, where it is located, how it is communicating and why access is needed. There are other methods of enforcing segmentation. One of the oldest is physical separation in which physically separate networks with their own dedicated servers, cables and network devices are set up for different levels of security. While this is a tried-and-true method, it can be costly to build completely separate environments for each user's trust level and role.



Quote for the day:

"Gratitude is the place where all dreams come true. You have to get there before they do." -- Jim Carrey

Daily Tech Digest - August 16, 2020

When to use Java as a Data Scientist

When you are responsible for building an end-to-end data product, you are essentially building a data pipeline where data is fetched from a source, features are calculated based on the retrieved data, a model is applied to the resulting feature vector or tensor, and the model results are stored or streamed to another system. While Python is great for model training and there are tools for model serving, it covers only a subset of the steps in this pipeline. This is where Java really shines, because it is the language used to implement many of the most commonly used tools for building data pipelines, including Apache Hadoop, Apache Kafka, Apache Beam, and Apache Flink. If you are responsible for building the data retrieval and data aggregation portions of a data product, then Java provides a wide range of tools. Also, getting hands-on with Java means that you will build experience with the programming language used by many big data projects. My preferred tool for implementing these steps in a data workflow is Cloud Dataflow, which is based on Apache Beam. While many tools for data pipelines support multiple runtime languages, there may be significant performance differences between the Java and Python options.
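To make the pipeline framing concrete, here is a minimal aggregation step in Apache Beam's Python SDK; the structure is identical in the Java SDK, which is where the performance comparison the author mentions comes into play (the data is invented):

```python
# pip install apache-beam; runs locally on the DirectRunner
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "Read" >> beam.Create([("user_1", 3), ("user_2", 5), ("user_1", 2)])
     | "SumPerUser" >> beam.CombinePerKey(sum)   # the feature-aggregation step
     | "Emit" >> beam.Map(print))                # stand-in for storing/streaming
```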


Alert: Russian Hackers Deploying Linux Malware

Analysts have linked Drovorub to the Russian hackers working for the GRU, the alert states, noting that the command-and-control infrastructure associated with this campaign had previously been used by the Fancy Bear group. An IP address linked to a 2019 Fancy Bear campaign is also associated with the Drovorub malware activity, according to the report. The Drovorub toolkit has several components, including a toolset consisting of an implant module coupled with a kernel module rootkit, a file transfer and port forwarding tool as well as a command-and-control server. All this is designed to gain a foothold in the network to create the backdoor and exfiltrate data, according to the alert. "When deployed on a victim machine, the Drovorub implant (client) provides the capability for direct communications with actor-controlled [command-and-control] infrastructure; file download and upload capabilities; execution of arbitrary commands as 'root'; and port forwarding of network traffic to other hosts on the network," according to the alert. Steve Grobman, CTO at the security firm McAfee, notes that the rootkit associated with Drovorub can allow hackers to plant the malware within a system and avoid detection, making it a useful tool for cyberespionage or election interference.


How Community-Driven Analytics Promotes Data Literacy in Enterprises

Data is deeply integrated into the business processes of nearly every company precisely because it is helping us make better decisions and not because of its ability to hasten lofty things, such as digital transformation. The C-suite sees the advantages data insights provide and as a result, non-technical employees are increasingly expected to be more technically adept at extraction and interpretation of data. Successful organizations foster a community of data curious teams and empower them with a single platform that enables everyone, regardless of technical ability, to explore, analyze and share data. Furthermore, domain experts and business leaders must be able to generate their own content, build off of content created by others and promote high-value, trustworthy content, while also demoting old, inaccurate, or unused content. This should resemble an active peer review process where helpful content is promoted and bad content is flagged as such by the community, while simultaneously being managed and governed by the data team.


The Anatomy of a SaaS Attack: Catching and Investigating Threats with AI

SaaS solutions have been an entry point for cyber-attackers for some time – but little attention is given to how the Tactics, Techniques and Procedures (TTPs) in SaaS attacks differ significantly from traditional TTPs seen in network and endpoint attacks. This raises a number of questions for security experts: how do you create meaningful detections in SaaS environments that don’t have endpoint or network data? How can you investigate threats in a SaaS environment? What does a ‘good’ SaaS environment look like as opposed to one that’s threatening? A global shortage in cyber skills already creates problems for finding security analysts able to work in traditional IT environments – hiring security experts with SaaS domain knowledge is all the more challenging. ... A more intricate and effective approach to SaaS security requires an understanding of the dynamic individual behind the account. SaaS applications are fundamentally platforms for humans to communicate – allowing them to exchange and store ideas and information. Abnormal, threatening behavior is therefore impossible to detect without a nuanced understanding of those unique individuals: where and when do they typically access a SaaS account, which files are they likely to access, and who do they typically connect with?


How to maximise your cloud computing investment

“At the core of the issue is that with a conventional, router-centric approach, access to applications residing in the cloud means traversing unnecessary hops through the HQ data centre, resulting in inefficient use of bandwidth, additional cost, added latency and potentially lower productivity,” said Pamplin. “To fully realise the potential of cloud, organisations must look to a business-driven networking model to achieve greater agility and substantial CAPEX and OPEX savings. “When it comes to cloud usage, a business-driven network model should also give clear application visibility through a single pane of glass, or else organisations will be in the dark regarding their application performance and, ultimately, their return on investment. “Only through utilisation of advanced networking solutions, where application policies are centrally defined based on business intent, and users are connected securely and directly to applications wherever they reside, can the benefits of the cloud be truly realised. “A business-driven approach eliminates the extra hops and risk of security compromises. This ensures optimal and cost-efficient cloud usage, as applications will be able to run smoothly while fully supported by the network. ..."
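
A rough sketch of the idea of centrally defined, business-intent application policies follows. The policy names, applications and path labels are invented for illustration and do not correspond to any vendor's configuration format: trusted SaaS apps break out directly to the cloud, while unclassified traffic is backhauled to the data centre for inspection.

```python
# Hypothetical application policies expressed as business intent.
POLICIES = {
    "office365":  {"path": "local-breakout", "priority": "high"},
    "salesforce": {"path": "local-breakout", "priority": "high"},
    "unknown":    {"path": "dc-backhaul",    "priority": "normal"},
}

def route(application: str) -> str:
    """Pick a network path from the centrally defined policy table."""
    policy = POLICIES.get(application, POLICIES["unknown"])
    return policy["path"]

for app in ("office365", "legacy-erp"):
    print(app, "->", route(app))
# office365  -> local-breakout  (no unnecessary hop through HQ)
# legacy-erp -> dc-backhaul     (traverses the data centre)
```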


AI Needs To Learn Multi-Intent For Computers To Show Empathy

Wael ElRifai, VP for solution engineering at Hitachi Vantara, reminds us that teaching a chatbot multi-intent is a more manual process than we’d like to believe. He says that, at its core, it comes down to actions like telling the software to search for keywords such as “end” or “and”, which act as connectors for independent clauses, breaking a multiple-intent query down into multiple single-intent queries and then applying traditional techniques. “Deciphering intent is far more complex than just language interpretation. As humans, we know language is imbued with all kinds of nuances and contextual inferences. And actually, humans aren’t that great at expressing intent, either. Therein lies the real challenge for developers,” said ElRifai.  ... “In many cases, that’s what you need, but when we look more broadly at the kinds of problems that businesses face, across many different industries, the vast majority of problems actually don’t follow that ‘one thing well’ model all that well. Many of the things we’d like to automate are more like puzzles to be solved, where we need to take in lots of different kinds of data, reason about them and then test out potential solutions,” said IBM’s Cox.
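
A naive sketch of the connector-keyword approach ElRifai describes might look like the following. The connector list and the stand-in classifier are assumptions; a real system would rely on proper language parsing rather than a keyword split.

```python
import re

# Connector keywords that often join independent clauses.
CONNECTORS = r"\b(?:and|then|also|plus)\b"

def split_intents(utterance: str) -> list[str]:
    """Break a multi-intent query into candidate single-intent queries."""
    parts = re.split(CONNECTORS, utterance, flags=re.IGNORECASE)
    return [p.strip(" ,.") for p in parts if p.strip(" ,.")]

def classify(query: str) -> str:
    # Stand-in for a traditional single-intent classifier.
    if "balance" in query:
        return "check_balance"
    if "transfer" in query:
        return "transfer_funds"
    return "fallback"

utterance = "Check my balance and then transfer $50 to savings"
for single_intent_query in split_intents(utterance):
    print(single_intent_query, "->", classify(single_intent_query))
# Check my balance        -> check_balance
# transfer $50 to savings -> transfer_funds
```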


Code Obfuscation: A Comprehensive Guide Towards Securing Your Code

Since code obfuscation brings about deep changes in the code structure, it may bring about a significant change in the performance of the application as well. In general, rename obfuscation hardly impacts performance, since it is only the variables, methods and classes that are renamed. On the other hand, control-flow obfuscation does have an impact on code performance: adding meaningless control loops to make the code hard to follow adds overhead to the existing codebase, so this powerful technique should be applied with abundant caution. A rule of thumb in code obfuscation is that the more techniques applied to the original code, the more time deobfuscation will consume. Depending on the techniques and the context, the impact on code performance usually varies from 10 percent to 80 percent. Hence potency and resilience, the factors discussed above, should become the guiding principles in code obfuscation, as any kind of obfuscation (except rename obfuscation) has an opportunity cost. Most of the obfuscation techniques discussed above do come at a cost to code performance, and it is up to development and security professionals to pick and choose the techniques best suited to their applications.
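
The trade-off is easiest to see side by side. The toy functions below are purely illustrative (real obfuscators operate on bytecode or binaries, not hand-edited source): rename obfuscation leaves behavior and speed untouched, while control-flow obfuscation buys opacity at a runtime cost.

```python
# Original, readable version:
def calculate_discount(price, rate):
    return price * (1 - rate)

# After "rename obfuscation": identifiers lose meaning; behavior
# and performance are unchanged.
def a(b, c):
    return b * (1 - c)

# After "control-flow obfuscation": a useless loop and an opaque
# predicate obscure the logic -- and add runtime overhead.
def a2(b, c):
    d = 0
    for _ in range(7):       # meaningless loop, pure overhead
        d = (d * d + 1) % 97
    if (d * d) >= 0:         # opaque predicate: always true
        return b * (1 - c)
    return -1                # dead branch, never taken

# All three variants compute the same result.
assert calculate_discount(100, 0.2) == a(100, 0.2) == a2(100, 0.2)
```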


Designing a High-throughput, Real-time Network Traffic Analyzer

Run-to-completion is a design concept which aims to finish the processing of an element as soon as possible, avoiding infrastructure-related interference such as passing data over queues or obtaining and releasing locks. As a latency-sensitive data-plane component, the Behemoth (along with some supplementary components) relies on that concept in its design. This means that, once a packet is diverted into the app, its whole processing is done in a single thread (worker), on a dedicated CPU core. Each worker is responsible for the entire mitigation flow – pulling the traffic from a NIC, matching it to a policy, analyzing it, enforcing the policy on it and, assuming it’s a legit packet, returning it to the very same NIC. This design yields great performance and negligible latency, but has the obvious disadvantage of a somewhat messy architecture, since each worker is responsible for multiple tasks. Once we’d decided that AnalyticsRT would not be an integral “station” in the traffic data-plane, we gained the luxury of using a pipeline model, in which the real-time objects “travel” between different threads (running in parallel), each one responsible for a different task.
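
A minimal sketch of the contrast between the two designs, with illustrative stage names (match_policy, analyze, enforce) standing in for the real mitigation flow:

```python
import queue
import threading

# Illustrative stand-ins for the real processing stages.
def match_policy(pkt):
    return {**pkt, "policy": "default"}

def analyze(pkt):
    return {**pkt, "verdict": "legit"}

def enforce(pkt):
    return pkt  # a legit packet would be returned to the NIC here

def run_to_completion(pkt):
    """One worker, one core: every stage runs inline, no queues, no locks."""
    return enforce(analyze(match_policy(pkt)))

def stage(inbox: queue.Queue, outbox: queue.Queue, work):
    """Pipeline model: each stage is its own thread; objects travel over queues."""
    while True:
        item = inbox.get()
        if item is None:  # shutdown sentinel
            outbox.put(None)
            return
        outbox.put(work(item))

print("run-to-completion:", run_to_completion({"id": 0}))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(q1, q2, match_policy), daemon=True).start()
threading.Thread(target=stage, args=(q2, q3, analyze), daemon=True).start()
q1.put({"id": 1})
q1.put(None)
while (item := q3.get()) is not None:
    print("pipeline:", enforce(item))
```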


RASP: A Must-Have for Protecting Mobile Applications

The concept of RASP has proven very effective because it helps in dealing with application-layer attacks. It also allows teams to define custom triggers so that critical components are never compromised in the business. The development team should remain appropriately skeptical when implementing security solutions so that their impact is never adverse. Well-implemented RASP solutions consume minimal resources, which ensures that overall goals are met with the least negative impact on application performance. Convincing stakeholders used to be a major hurdle for organizations, but RASP solutions have made it much easier because they offer mobile-friendly services: clear visibility into the application along with handling of security threats, with the solution working quietly in the background. Implemented well, the concept has proven to be a game-changer, helping companies satisfy their consumers. Companies can choose from several implementation approaches, including binary instrumentation, virtualization and others.
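
To make the idea concrete, here is a heavily simplified, hypothetical sketch of a RASP-style "custom trigger". Real mobile RASP products hook the app through binary instrumentation or virtualization; the debugger check and the decorator below are illustrative stand-ins only.

```python
import sys

def runtime_is_hostile() -> bool:
    # Example tampering indicator: a tracer/debugger attached to the process.
    # A real product would check many more signals (root/jailbreak, hooks, etc.).
    return sys.gettrace() is not None

def rasp_protected(func):
    """Custom trigger: block the sensitive call instead of crashing the app."""
    def wrapper(*args, **kwargs):
        if runtime_is_hostile():
            raise PermissionError("RASP: hostile runtime detected, call blocked")
        return func(*args, **kwargs)
    return wrapper

@rasp_protected
def transfer_funds(amount: int) -> str:
    # Stand-in for a critical component the business never wants compromised.
    return f"transferred {amount}"

print(transfer_funds(100))  # runs normally in a clean runtime
```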


Cyber Adversaries Are Exploiting the Global Pandemic at Enormous Scale

For cyber adversaries, the development of exploits at scale and the distribution of those exploits via legitimate and malicious hacking tools continue to take time. Even though 2020 looks to be on pace to shatter the record for published vulnerabilities in a single year, vulnerabilities from this year also have the lowest rate of exploitation ever recorded in the 20-year history of the CVE List. Interestingly, vulnerabilities from 2018 claim the highest exploitation prevalence (65%), yet more than a quarter of firms registered attempts to exploit CVEs from 15 years earlier, in 2004. Exploit attempts against several consumer-grade routers and IoT devices were at the top of the list for IPS detections. While some of these exploits target newer vulnerabilities, a surprising number targeted vulnerabilities first discovered in 2014 – an indication that criminals are looking for flaws that still exist in home networks to use as a springboard into the corporate network. In addition, Mirai (2016) and Gh0st (2009) dominated the most prevalent botnet detections, driven by attackers' apparent growing interest in targeting older vulnerabilities in consumer IoT products.


Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent