Daily Tech Digest - October 18, 2020

How Robotic Process Automation Can Become More Intelligent

Artificial Intelligence (AI) and its constituent disciplines, including Machine Learning (ML) and Natural Language Processing (NLP), supply the learning and decision-making abilities in an RPA task. Put simply, RPA is for doing; AI is for deciding what should be done. AI makes RPA intelligent. Together, these technologies give rise to Cognitive Automation, which automates many use cases that were simply impossible before. The most recent transformation came when virtualized platforms allowed resources to be added and removed for processes depending on workload. This let organizations explore opportunities to define their processes in terms of automated rules, and Robotic Process Automation grew out of that work. RPA goes a step further by automating monotonous processes so that human intervention is no longer required. A straightforward application is rule-based responses for certain workflows: once the rules are coded, they need no further intervention, and the RPA system handles everything. Organizations have profited from implementing RPA-based solutions and processes, in some cases reducing expenses severalfold.
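The "code the rules once, no further intervention" idea above can be sketched as a tiny rule engine. The rules and ticket fields here are hypothetical, chosen only to illustrate the pattern:

```python
# A minimal sketch of rule-based automation: rules are defined once, and
# matching requests are handled without human intervention. Anything the
# rules don't cover is escalated to a person.

RULES = [
    # (condition, action) pairs, evaluated in order
    (lambda t: t["type"] == "password_reset", "send_reset_link"),
    (lambda t: t["type"] == "invoice" and t["amount"] < 500, "auto_approve"),
]

def route(ticket):
    """Apply the first matching rule; escalate to a human otherwise."""
    for condition, action in RULES:
        if condition(ticket):
            return action
    return "escalate_to_human"

print(route({"type": "password_reset"}))          # → send_reset_link
print(route({"type": "invoice", "amount": 120}))  # → auto_approve
print(route({"type": "complaint"}))               # → escalate_to_human
```

Adding AI to this picture would mean replacing the hand-written conditions with learned ones, which is the shift from plain RPA to cognitive automation that the excerpt describes.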


So You Want to Be a Data Protection Officer?

The GDPR states the Data Protection Officer must be capable of performing their duties independently, and may not be “penalized or dismissed” for performing those duties. (The DPO’s loyalties are to the general public, not the business. The DPO’s salary can be considered a tax for doing business on the internet.) Philip Yannella, a Philadelphia attorney with Ballard Spahr, said: “A Data Protection Officer can’t be fired because of the decisions he or she makes in that role. That spooks some U.S. companies, which are used to employment at will. If a Data Protection Officer is someone within an organization, he or she should be an expert on GDPR and data privacy.” Not having a Data Protection Officer could get quite expensive, resulting in stiff fines on data processors and controllers for noncompliance. Fines are administered by member state supervisory authorities who have received a complaint. Yannella went on to say, “No one yet knows what kind of behavior would trigger a big fine. A lot of companies are waiting to see how this all shakes out and are standing by to see what kinds of companies and activities the EU regulators focus on with early enforcement actions.”


The State of Enterprise Architecture and IT as a Whole

EA is an enterprise-wide, business-driven, holistic practice that helps steer companies toward their desired longer-term state and respond to planned and unplanned business and technology change. Embracing EA principles is a central part of EA, though they are rarely adopted. The focus in those early days was reducing complexity by addressing duplication, overlap, and legacy technology. With the line between technology and applications blurring, and application sprawl happening almost everywhere, a focus on rationalizing the application portfolio soon emerged. I would love to say that EA adoption was smooth, but there were many distractions and competing industry trends, everything from ERP to ITIL to innovation. The focus was on delivery and operations, and there was little mindshare for strategic, big-picture, longer-term thinking. Practitioners were rewarded only for supporting project delivery. Many left the practice. And frankly, a lot of people who didn’t have EA skills were thrust into the role. That further exacerbated adoption challenges and defined the delivery-oriented, technology-focused path EA would follow. It is still dominant today.


Managing and Governing Data in Today’s Cloud-First Financial Services Industry

Artificial intelligence and machine learning technologies have proven to accelerate the ability of banks, insurance companies, and retail brokerages to successfully combat fraud, manage risk, cross-sell and upsell, and provide tailored services to existing clients. To harness the power and potential of these solutions, financial institutions will look to leverage external data from third-party vendors and partners, as well as in-house data, to mine for the best answers and recommendations. Today’s cloud-native and cloud-first solutions offer financial institutions the ability to capture, process, analyze, and leverage the intelligence from this data much faster, more efficiently, and more effectively than trying to do it internally. Improving Customer Experience Through Digital Modernization: Banks and insurance companies have been modernizing and/or replacing legacy core systems, many of which have been around for decades, with cloud-native and cloud-first solutions. These include offerings from organizations like FIS Global, nCino, and my former employer EllieMae in the banking industry, and offerings from Guidewire and DuckCreek for cloud-native policy administration, claims, and underwriting solutions in the insurance sector.


Optimizing Privacy Management through Data Governance

Data governance is responsible for ensuring data assets are of sufficient quality, and that access is managed appropriately to reduce the risk of misuse, theft, or loss. Data governance is also responsible for defining guidelines, policies, and standards for data acquisition, architecture, operations, and retention, among other design topics. In the next blog post, we will discuss further the segregation of duties shown in figure 1; however, at this point it is important to note that modern data governance programs need to take a holistic view to guide the organization to bake quality and privacy controls into the design of products and services. Privacy by design is an important concept to understand and a requirement of modern privacy regulations. At the simplest level it means that processes and products that collect and/or process personal information must be architected and managed in a way that provides appropriate protection, so that individuals are harmed neither by the processing of their information nor by a privacy breach. Not all privacy breaches involve malice. Organizations have experienced breaches related to how they managed physical records containing personal information, because staff were not trained to properly handle the information.


The Definitive Guide to Delivering Value from Data using DataOps

The DataOps solution to the hand-over problem is to allow every stakeholder full access to all process phases and tie their success to the overall success of the entire end-to-end process ... Value delivery is a sprint, not a relay. Treat the data-to-value process as a unified team sprint to the finish line (business value) rather than a relay race where specialists pass the baton to each other in order to get to the goal. It is best to have a unified team spanning multiple areas of expertise responsible for overall value delivery, instead of siloed specialist groups each responsible for a single process phase. ... A well-architected data infrastructure accelerates delivery times, maximizes output, and empowers individuals. The DevOps movement played an influential role in decoupling and modularizing software infrastructure from single monolithic applications to multiple fail-safe microservices. DataOps aims to bring the same influence to data infrastructure and technology. ... At its core, DataOps aims to promote a culture of trust, empowerment, and efficiency in organizations. A successful DataOps transformation needs strategic buy-in at every level, from C-suite executives to individual contributors.


How do Organizations Choose a Cloud Service Provider? Is it Like an Arranged Marriage?

While not as critical a decision as marriage, most organizations today face a similar trust-based dilemma: which cloud service provider to trust with their data? There is no debate over the clear value drivers for cloud computing: performance, cost, and scalability, to name a few. However, the lack of control and oversight can make organizations hesitant to hand over their most valuable asset, information, to a third party, trusting that it has adequate information protection controls in place. With any trust-based decision, external validation can play an important role. Arranged marriages rely on positive feedback and references, mostly attested by the matchmaker. They also rely on supporting evidence such as corroboration from relatives and more tangible factors such as the education and career history of the potential bride or groom. In the case of cloud service providers, independent validation such as certifications, attestations, or other information protection audits can make or break a deal. The notion of cloud computing may have existed as far back as the 1960s, but cloud services took the form we know today with the launch of services from big players such as Amazon, Google, and Microsoft in 2006-2007.


Professor creates cybersecurity camp to inspire girls to choose STEM careers

The way I got into cybersecurity: I got into cybersecurity maybe five years ago. But in the field of IT, I always liked to pull things apart, figure out how things work, and problem-solve. I was always in the field of IT. I worked as a programmer at IBM for a couple of years, and then I segued into the academy, because I felt I could be more impactful in front of a classroom. In that IBM programming setting, I noticed I was the only woman, and the only woman of color, in that field. I said, "OK, I need to do something to change this." I went into the academy and said, "Maybe if I was an instructor, I could empower more young women to go and pursue this field of study." Then, as time went on over those five years, the cybersecurity discipline became very hot. And really, it was very intriguing how hackers were hacking in and sabotaging systems. Again, it was like a puzzle, problem solving: how can we out-think the hacker, and how can we make things safe? That became very intriguing to me. Then I wrote this grant, the GenCyber grant. Dr. Li-Chiou Chen, my chair at the time, recommended that I explore a grant for GenCyber, so I submitted it, and I was shocked that I won.


Germany’s blockchain solution hopes to remedy energy sector limitations

If successfully executed, Morris explained that BMIL could serve as the basis for a wide range of DERs supporting both Germany’s wholesale and retail electricity markets: “This will make it easy, efficient and low cost for any DER in Germany to participate in the energy market. Grid operators and utility providers will also gain access to an untapped decarbonized Germany energy system.” However, technical challenges remain. Mamel from DENA noted that BMIL is a project built around the premise of interoperability — one of blockchain’s greatest challenges to date. While DENA is technology agnostic, Mamel explained that DENA aims to test a solution that will be applicable to the German energy sector, which already consists of a decentralized framework with many industry players using different standards. As such, DENA decided to take an interoperability approach to drive Germany’s energy economy, testing two blockchain development environments in BMIL. Both Ethereum and Substrate, the blockchain-building framework for Polkadot, will be applied, along with different concepts regarding decentralized identity protocols.


How to Overcome the Challenges of Using a Data Vault

Within the data vault approach, there are certain layers of data. These range from the source systems where data originates, to a staging area where data arrives from the source system, modeled according to the original structure, to the core data warehouse, which contains the raw vault, a layer that allows tracing back to the original source system data, and the business vault, a semantic layer where business rules are implemented. Finally, there are data marts, which are structured based on the requirements of the business. For example, there could be a finance data mart or a marketing data mart, holding the relevant data for analysis purposes. Of these layers, the staging area and the raw vault are best suited to automation. The data vault modeling technique brings ultimate flexibility by separating the business keys, which uniquely identify each business entity and do not change often, from their attributes. This results, as mentioned earlier, in many more data objects being in the model, but it also provides a data model that can be highly responsive to changes, such as the integration of new data sources and business rules.
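The key/attribute separation described above is often realized as "hub" tables holding only stable business keys and "satellite" tables holding the changeable attributes. A minimal sketch using SQLite, with illustrative (not standard) table and column names:

```python
import sqlite3

# Hub holds just the business key; the satellite versions the attributes
# by load date, so attribute changes never touch the hub.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hub_customer (
        customer_key TEXT PRIMARY KEY,
        load_date    TEXT
    );
    CREATE TABLE sat_customer (
        customer_key TEXT REFERENCES hub_customer(customer_key),
        load_date    TEXT,
        name         TEXT,
        city         TEXT,
        PRIMARY KEY (customer_key, load_date)
    );
""")
conn.execute("INSERT INTO hub_customer VALUES ('C001', '2020-10-01')")
conn.execute("INSERT INTO sat_customer VALUES ('C001', '2020-10-01', 'Acme', 'Berlin')")
# An attribute change simply adds a new satellite row; the hub is untouched
conn.execute("INSERT INTO sat_customer VALUES ('C001', '2020-10-15', 'Acme', 'Hamburg')")

latest = conn.execute("""
    SELECT city FROM sat_customer
    WHERE customer_key = 'C001'
    ORDER BY load_date DESC LIMIT 1
""").fetchone()
print(latest[0])  # → Hamburg
```

This is why the model contains more objects than a conventional star schema, but can absorb new sources or attributes by adding satellites rather than restructuring existing tables.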



Quote for the day:

"The closer you get to excellence in your life, the more friends you'll lose. People love average and despise greatness." -- Tony Gaskins

Daily Tech Digest - October 17, 2020

Data literacy skills key to cost savings, revenue growth

"The bottom line is that bad data is costly, because decision-makers, managers, data scientists and others who have to work with data have to compensate for that bad data," she said. "That's time-consuming, but the real cost of that bad data is that it's an obstacle in their journey to become insights-driven." To prevent those losses -- and to help people make data-driven decisions that have the potential to spur revenue growth -- organizations should enable employees with data literacy skills. Employees need an education in data. Data-driven companies simply grow faster, Belissent said, noting that Forrester has studied hundreds of companies. And organizations do want to be data-driven, she continued, adding that 88% of those surveyed by Forrester want to improve the use of data insights in their decision-making. But if their data is low quality, or if the data isn't there at all, it serves as a significant impediment to growth. And in fact, according to Forrester's research, fewer than half of all decisions are made based on quantitative analysis. Organizations, therefore, need to implement training programs to give employees the data literacy skills -- the ability to evaluate, work with, communicate and apply data -- to do their jobs.


Real Time APIs in the Context of Apache Kafka

One of the challenges that we have always faced in building applications, and systems as a whole, is how to exchange information between them efficiently whilst retaining the flexibility to modify the interfaces without undue impact elsewhere. The more specific and streamlined an interface, the greater the likelihood that it is so bespoke that changing it would require a complete rewrite. The inverse also holds; generic integration patterns may be adaptable and widely supported, but at the cost of performance. Events offer a Goldilocks-style approach in which real-time APIs can be used as the foundation for applications that are flexible yet performant; loosely coupled yet efficient. Events can be considered the building blocks of most other data structures. Generally speaking, they record the fact that something has happened and the point in time at which it occurred. An event can capture this information at various levels of detail: from a simple notification to a rich event describing the full state of what has happened. From events, we can aggregate up to create state—the kind of state that we know and love from its place in RDBMS and NoSQL stores.
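The final point, aggregating events up into state, can be shown with a toy fold over an event log. The event shapes here are invented for illustration and are not a Kafka API; a stream processor materializes state from a topic in essentially the same way:

```python
# An ordered log of immutable events...
events = [
    {"account": "A", "type": "deposit",  "amount": 100},
    {"account": "A", "type": "withdraw", "amount": 30},
    {"account": "B", "type": "deposit",  "amount": 50},
]

def aggregate(events):
    """Fold an ordered event log into current balances: the 'state'
    we normally expect from an RDBMS or NoSQL store."""
    state = {}
    for e in events:
        sign = 1 if e["type"] == "deposit" else -1
        state[e["account"]] = state.get(e["account"], 0) + sign * e["amount"]
    return state

print(aggregate(events))  # → {'A': 70, 'B': 50}
```

Because the log is the source of truth, the same events can later be re-aggregated into different state shapes without touching the producers.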


Emotional AI — can chatbots convey empathy?

Maya Angelou once said — “I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.” So, since emotions are our most human quality, what if we could teach artificial intelligence (AI) to understand our feelings? In recent years, AI and machine learning algorithms have held the world spellbound with the rapid pace of development and integration in various industries and verticals. The goal of AI research has shifted over the years: from computing what humans could not, to beating us at specific tasks, and most recently to creating algorithms that can show how they work. To put how rapidly AI is growing in context, a Pew Research Center study reports that by 2025, AI and robotics will permeate most segments of daily life, while an Oxford University study projects that within the next 25 years, developed nations will experience job loss rates of up to 47%. AI is displacing the roles of both white- and blue-collar workers, from travel agents to bank tellers, gas station attendants to factory workers. This has tremendous implications for industries such as home maintenance, transport and logistics, healthcare, and most significantly, customer service.


The brain of the SIEM and SOAR

What the nerves need is a brain that can receive and interpret their signals. An XDR engine, powered by Bayesian reasoning, is a machine-powered brain that can investigate any output from the SIEM or SOAR at speed and scale. This replaces traditional Boolean logic (that is, searching for things that IT teams know to be somewhat suspicious) with a much richer way to reason about the data. This additional layer of understanding will work out of the box with the products an organization already has in place to provide key correlation and context. For instance, imagine that a malicious act occurs. That malicious act is going to be observed by multiple types of sensors. All of that information needs to be put together, along with the context of the internal systems, the external systems and all of the other things that integrate at that point. This gives the system the information needed to know the who, what, when, where, why and how of the event. This is what the system’s brain does. It boils all of the data down to: “I see someone bad doing something bad. I have discovered them. And now I am going to manage them out.” What the XDR brain is going to give the IT security team is more accurate, consistent results, fewer false positives and faster investigation times.
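The Boolean-versus-Bayesian contrast above can be made concrete with a small sketch. Instead of a hard rule ("alert if X and Y"), each sensor observation updates a belief that the activity is malicious. The likelihood numbers below are invented for illustration, not taken from any real product:

```python
def update(prior, likelihood_if_bad, likelihood_if_benign):
    """One Bayes update: P(malicious | observation)."""
    numerator = likelihood_if_bad * prior
    return numerator / (numerator + likelihood_if_benign * (1 - prior))

belief = 0.01  # assumed base rate of malicious activity
# Each tuple: (P(observation | malicious), P(observation | benign))
observations = [
    (0.9, 0.1),  # endpoint sensor: unusual process tree
    (0.8, 0.2),  # network sensor: connection to a rare domain
    (0.7, 0.3),  # identity system: login from a new location
]
for bad, benign in observations:
    belief = update(belief, bad, benign)

# Three individually weak signals combine into substantial suspicion
print(round(belief, 3))  # → 0.459
```

No single observation would trip a Boolean rule on its own, but correlated across sensors the belief rises from 1% to roughly 46%, which is the kind of cross-sensor reasoning the excerpt attributes to the XDR "brain".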


Pachyderm and the power of GitHub Actions: MLOps meets DevOps

The kinds of problems we face in machine learning are fundamentally different than the ones we face in traditional software coding. Functional issues, like race conditions, infinite loops, and buffer overflows, don’t come into play with machine learning models. Instead, errors come from edge cases, lack of data coverage, adversarial assault on the logic of a model, or overfitting. Edge cases are the reason so many organizations are racing to build AI Red Teams to diagnose problems before things go horribly wrong. It’s simply not enough to port your CI/CD and infrastructure code to machine learning workflows and call it done. Handling this new generation of machine learning operations (MLOps) problems requires a brand new set of tools that focus on the gap between code-focused operations and MLOps. The key difference is data. We need to version our data and datasets in tandem with the code. That means we need tools that specifically focus on data versioning, model training, production monitoring, and many others unique to the challenges of machine learning at scale. Luckily, we have a strong tool for MLOps that does seamless data version control: Pachyderm. 
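The core idea of versioning data in tandem with code can be illustrated with content addressing: a dataset version is identified by a hash of its contents, so a trained model can always be tied to the exact data it saw. This is a toy sketch of the general concept, not Pachyderm's API or implementation:

```python
import hashlib
import json

def dataset_version(records):
    """Hash a dataset deterministically; any change yields a new version."""
    canonical = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

v1 = dataset_version([{"x": 1, "y": 2}, {"x": 3, "y": 4}])
v2 = dataset_version([{"x": 1, "y": 2}, {"x": 3, "y": 5}])  # one label changed
print(v1 != v2)  # → True
```

Recording such a version identifier alongside each training run is what makes results reproducible: if a model misbehaves in production, the exact dataset that produced it can be recovered and inspected.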


Microsoft: Learn JavaScript Node.js with this new free course

The Node.js course teaches beginners what they need to know to build things like web servers, microservices, command-line apps, web interfaces, drivers for database access, desktop apps using Electron, IoT client and server libraries for single-board computers like Raspberry Pi, machine-learning models and more. Yohan Lasorsa, a senior Microsoft cloud developer advocate and main host of the Node.js series, recommends students complete the JavaScript video series before starting the Node.js series. To accompany the video tutorials, Microsoft has also published an extensive interactive Node.js course consisting of five modules. The modules include an introduction to Node.js that explains what it is, how it works, and when it could be useful. The second module explains how to use dependencies obtained from the NPM registry, while the third takes students through debugging Node.js apps with the built-in debugger and the debugger available in Microsoft's Visual Studio Code (VS Code) editor. The fourth and fifth modules teach students how to work with files and directories in Node.js apps and how to build a web API with Node.js and the Express.js framework for adding things like authentication.


Exponential growth in DDoS attack volumes

We recognize the scale of potential DDoS attacks can be daunting. Fortunately, by deploying Google Cloud Armor integrated into our Cloud Load Balancing service—which can scale to absorb massive DDoS attacks—you can protect services deployed in Google Cloud, other clouds, or on-premise from attacks. We recently announced Cloud Armor Managed Protection, which enables users to further simplify their deployments, manage costs, and reduce overall DDoS and application security risk. Having sufficient capacity to absorb the largest attacks is just one part of a comprehensive DDoS mitigation strategy. In addition to providing scalability, our load balancer terminates network connections on our global edge, only sending well-formed requests on to backend infrastructure. As a result it can automatically filter many types of volumetric attacks. For example, UDP amplification attacks, synfloods, and some application-layer attacks will be silently dropped. The next line of defense is the Cloud Armor WAF, which provides built-in rules for common attacks, plus the ability to deploy custom rules to drop abusive application layer requests using a broad set of HTTP semantics.


Best Practices for Managing Remote IT Teams

Many DBAs and developers have been working remotely for months now, but as IT budgets grow tighter, they’ll need to do more with less. Ensuring DBAs have the ability to monitor the database from anywhere will be a core part of a continued successful remote working strategy. There are many reasons for database professionals to embrace remote monitoring, whether it’s migrating to the cloud, adapting to new challenges, keeping an eye on multiple instances in many environments or gaining fine-grained access to monitoring data. ... Cloud adoption is up significantly this year as development teams turn to it, particularly for greenfield projects. But with all of that data migration, database professionals are struggling with being able to monitor cloud-based servers alongside on-premises servers, and having a distributed team doesn’t make it easier. Adopting remote monitoring tools can simplify monitoring of the cloud—once you’re monitoring a remote database server it doesn’t matter where the server is. It’s impossible to say what might happen next month or even next year, but as companies grapple with these cloud challenges, advanced remote monitoring tools can help monitor disparate, hybrid environments from one screen.


Hearing The World Through Machine Learning

With ML, companies can apply cutting-edge technology to transform an age-old problem. Startups are leveraging deep learning and advanced signal processing at a granularity not previously possible to improve hearing quality.  Some incumbent hearing aid companies have recently touted their ability to add “AI” features such as Alexa integrations and step counters. Unfortunately, these features don’t seem to improve actual hearing quality nor take advantage of true ML capabilities beyond generating marketing buzz. ... In my conversation with Andre Esteva, the Head of Medical AI at Salesforce, he noted that “traditional approaches have been limited by extensive manual efforts to acquire data, hand-craft it into a usable format, prepare rudimentary algorithms and deploy them to devices. In contrast, ML has a natural flywheel effect in which devices collect data at scale, ML training protocols automatically process the data, update themselves and redeploy. The effect is a significant reduction in product feedback cycles and an increase in the range of capabilities available. The beauty of this approach is that the underlying intelligence improves over time as the neural nets go through iterative training.”


Q&A on the Book Leading with Uncommon Sense

It is a three-step practice that includes pausing, introspecting, and acting. It requires leaders to continually cycle through the three steps: pause, introspect and act. At the core of the practice is the need to slow down. Leaders can pause both in the moment when reacting to a difficult situation or in a planned, proactive way to prepare for challenges and to harvest learnings. When introspecting, leaders look inward and examine their own thoughts or feelings, carefully investigating what is happening with their thinking. Introspecting allows leaders to pay attention to four areas: recognizing what is outside of our awareness, learning from our emotions, tracking the impact of social identities, and embracing uncertainty. After investigating these four areas and gathering useful information, leaders are in a better position to take action. In addition, we know that leaders cannot allow themselves to be paralyzed by the complexities of any given moment and that they must have the courage to make decisions and take action in the very face of that complexity.



Quote for the day:

"Just because you can't have what you want NOW doesn't mean never. Be patient, persistent and resourceful." -- Tim Fargo

Daily Tech Digest - October 16, 2020

New Emotet attacks use fake Windows Update lures

According to an update from the Cryptolaemus group, since yesterday, these Emotet lures have been spammed in massive numbers to users located all over the world. Per this report, on some infected hosts, Emotet installed the TrickBot trojan, confirming a ZDNet report from earlier this week that the TrickBot botnet survived a recent takedown attempt from Microsoft and its partners. These booby-trapped documents are being sent from emails with spoofed identities, appearing to come from acquaintances and business partners. Furthermore, Emotet often uses a technique called conversation hijacking, through which it steals email threads from infected hosts, inserts itself in the thread with a reply spoofing one of the participants, and adds the booby-trapped Office documents as attachments. The technique is hard to pick up, especially among users who work with business emails on a daily basis, and that is why Emotet so often manages to infect corporate and government networks. In these cases, training and awareness are the best way to prevent Emotet attacks. Users who work with emails on a regular basis should be made aware of the danger of enabling macros inside documents, a feature that is very rarely used for legitimate purposes.


Prolific Cybercrime Group Now Focused on Ransomware

Overall, the group does not display sophisticated tactics, techniques and procedures (TTPs), but they are aggressive in their attempts to gain a foothold in companies, says Kimberly Goody, senior manager of the Mandiant threat intelligence financial crime team at FireEye. "The main thing that sets this group apart from our perspective is how widespread their campaigns are," she says. "They are sophisticated, but they have a wide reach. And their constant evolution of their TTPs—even though minor—can prevent organizations from being able to adequately defend against their spam campaigns." The group also highlights a trend observed by FireEye. Since early 2019, financial cybercrime groups once focused on stealing payment-card data are now shifting to compromising corporate networks, infecting a significant number of systems with ransomware, and then extorting the business for large sums, Goody says. "Point of sale intrusions were very profitable, and we saw actors such as FIN6 and FIN7—all the way back to FIN5—they were targeting payment card data," Goody says.


Agile: 4 signs your transformation is in trouble

True culture change requires more than a shot in the arm. The shot in the arm jolts the team awake and gets them moving, but from that moment the old culture drags everyone back where they started, so you have to fight against it. If you started with fun and creativity (or just never got there), look for opportunities to light the path toward a more creative and fun world at a leadership level. Virtual happy hours are fine, but, especially during COVID, you need to go further than that to set the example. Maybe you throw in a game. Maybe you have an appetizer delivered to each person’s house. Maybe you give each person $30 to surprise a teammate with a personal encouragement. No matter the approach, bring back the fun and joy and you’ll boost creativity from your agile teams. When you go to the gym and you only lift weights to strengthen your biceps, they get stronger while your leg muscles stay the same (or get weaker). The same thing happens in agile and produces similarly disproportionate results. Focusing on agility in one part of the organization (like the software teams), but not the leadership that fills their funnel, actually builds fragility into your business.


Critical SonicWall VPN Portal Bug Allows DoS, Worming RCE

“VPN bugs are tremendously dangerous for a bunch of reasons,” he told Threatpost. “These systems expose entry points into sensitive networks and there is very little in the way of security introspection tools for system admins to recognize when a breach has occurred. Attackers can breach a VPN and then spend months mapping out a target network before deploying ransomware or making extortion demands.” Adding insult to injury, this particular flaw exists in a pre-authentication routine, and within a component (SSL VPN) which is typically exposed to the public internet. “The most notable aspect of this vulnerability is that the VPN portal can be exploited without knowing a username or password,” Young told Threatpost. “It is trivial to force a system to reboot…An attacker can simply send crafted requests to the SonicWALL HTTP(S) service and trigger memory corruption.” However, he added that a code-execution attack does require a bit more work. “Tripwire VERT has also confirmed the ability to divert execution flow through stack corruption, indicating that a code-execution exploit is likely feasible,” he wrote, adding in an interview that an attacker would need to also leverage an information leak and a bit of analysis to pull it off.


Avoiding Serverless Anti-Patterns with Observability

New adopters of serverless are more susceptible to anti-patterns, and not being aware of these anti-patterns, or not understanding their effects, can be frustrating and act as a barrier to serverless adoption. Observability mitigates this black-box effect, and understanding the possible anti-patterns allows us to monitor the right metrics and take the right actions. Therefore, this article goes through some of the major anti-patterns unique to serverless and describes how the right observability strategy can cushion the impact of anti-patterns creeping into your serverless architectures. Serverless applications tend to work best when asynchronous. This is a concept that was preached by Eric Johnson in his talk at ServerlessDays Istanbul, titled “Thinking Async with Serverless.” He later went on to present a longer version of the talk at ServerlessDays Nashville. As teams and companies begin to adopt serverless, one of the biggest mistakes they can make is designing their architecture while still having a monolith mentality. This results in a lift and shift of their previous architectures, which means the introduction of large controller functions and misplaced await calls.
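The "misplaced await" anti-pattern can be sketched as follows. The function names are invented for illustration; the point is that a controller which awaits each step serially keeps one function running (and billed) for the whole chain, whereas in an asynchronous design each step would instead be triggered by the previous step's event, such as a queue message, so no function waits on another:

```python
import asyncio

async def resize_image(name):
    await asyncio.sleep(0.01)  # stand-in for real work
    return f"{name}-resized"

async def store_metadata(name):
    await asyncio.sleep(0.01)
    return f"{name}-recorded"

async def monolith_style_controller(name):
    # Anti-pattern: one long-lived function awaiting the whole chain,
    # carried over from a monolith mentality.
    resized = await resize_image(name)
    return await store_metadata(resized)

print(asyncio.run(monolith_style_controller("cat.png")))  # → cat.png-resized-recorded
```

In a serverless runtime, the controller's total duration (and cost) is the sum of every awaited step, and a failure anywhere loses the whole chain; event-triggered steps avoid both problems, which is why observability over the event flow becomes essential.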


Only the Agile Survive in Today’s Ever-Changing Business Environment

It’s almost inevitable that you’ll end up overlooking a vital document or missing a key contract in the hectic rush. Scrabbling around for all the relevant files and folders causes your confidence to leak away as you feel that you’re just not ready for this deal, and I’ve often seen that become a self-fulfilling prophecy. One company I consulted for learned this lesson when a well-known international consumer goods brand showed interest in buying their logistics business. Although the CEO had been hoping to arrange an exit on favorable terms, the CFO wasn’t on board and hadn’t made any advance preparations for due diligence situations. The prospective buyer was only in town for three days and wanted to look over their documents and agree on a preliminary contract before she left, but the CFO was so rattled by the pressure that he presented a profit and loss statement from the wrong year. The buyer declined to continue with the negotiations, and the CFO was left knowing that he’d let a great deal slip through his fingers simply because he didn’t have all of his books digitized and organized in a secure, centralized resource.


Singapore Launches IoT Cybersecurity Labelling

The Cybersecurity Labelling Scheme will focus first on Wi-Fi routers and smart home hubs, according to the Cyber Security Agency of Singapore. "Amid the growth in number of IoT products in the market, and in view of the short time-to-market and quick obsolescence, many consumer IoT products have been designed to optimize functionality and cost over security," the Cyber Security Agency says. "As a result, many devices are being sold with poor cybersecurity provisions, with little to no security features built-in." ... Singapore's program is voluntary for manufacturers for now, but the nation intends eventually to make it mandatory. The testing has four rating levels, and the CSA has offered detailed information for manufacturers. Developers can make declarations that their products conform with the first two levels. The first level means a product meets basic security requirements, such as mandating the use of unique passwords and delivering software updates as dictated by the European Telecommunications Standards Institute's EN 303 645 standard. The second level encompasses the first level requirements plus following the IoT Cyber Security Guide developed by Singapore's Infocomm Media Development Authority, or IMDA.


Why AI can’t ever reach its full potential without a physical body

A designer can’t effectively build a software sense-of-self for a robot. If a subjective viewpoint were designed in from the outset, it would be the designer’s own viewpoint, and it would also need to learn and cope with experiences unknown to the designer. So what we need to design is a framework that supports the learning of a subjective viewpoint. Fortunately, there is a way out of these difficulties. Humans face exactly the same problems but they don’t solve them all at once. The first years of infancy display incredible developmental progress, during which we learn how to control our bodies and how to perceive and experience objects, agents and environments. We also learn how to act and the consequences of acts and interactions. Research in the new field of developmental robotics is now exploring how robots can learn from scratch, like infants. The first stages involve discovering the properties of passive objects and the “physics” of the robot’s world. Later on, robots note and copy interactions with agents (carers), followed by gradually more complex modelling of the self in context. In my new book, I explore the experiments in this field.


Singapore releases AI ethics, governance reference guide

Observing that AI seeks to inject intelligence into machines to mimic human action and thought, SCS President Chong Yoke Sin noted that rogue or misaligned AI algorithms with unintended bias could cause significant damage, which underscores the importance of ensuring AI is used ethically. "On the other hand, stifling innovation in the use of AI will be disastrous as the new economy will increasingly leverage AI," Chong said, as she stressed the need for a balanced approach that prioritised human safety and interests. Speaking during SCS' Tech3 Forum, Singapore's Minister for Communications and Information S. Iswaran further underscored the need to build trust through the responsible use of AI in order to drive adoption and extract the most benefit from the technology. "Responsible adoption of AI can boost companies' efficiencies, facilitate decision-making, and help employees upskill into more enriching and meaningful jobs," Iswaran said. "Above all, we want to build a progressive, safe, and trusted AI environment that benefits businesses and workers, and drives economic transformation." The launch of a reference guide would provide businesses access to a council of experts proficient in AI ethics and governance, so they could deploy the technology responsibly, the minister said.


How to ensure faster, quality code to ease the development process

If there’s one metric most businesses are focused on when it comes to coding, it’s speed. Tech and dev teams are at the forefront of innovation, and they’re used to moving at a serious pace. Anything that slows down the process of shipping code damages their ability to perform. To move quickly, though, and to get from planning to coding in record time, teams need real-time visibility into what’s being worked on and transparent access to the latest updates from the team. Closed-off communication, like email, which limits visibility of information to a handful of people selected by a single sender, isn’t up to the task. Instead, channel-based communication can provide a single space for developers to collaborate, share priorities and simplify processes in order to speed up testing and deployment. Rather than having to sift through information flying in from different sources, channel-based messaging integrates all existing tools into a single place, meaning developers can increase visibility over deploys and get straight to the information they need. Developers can pull in key material using integrations that plug different apps like Jira and GitHub right into their discussions.



Quote for the day:

"A coach is someone who can give correction without causing resentment." -- John Wooden

Daily Tech Digest - October 15, 2020

6 Reasons Why Internal Data Centers Won’t Disappear

Most companies are moving to a hybrid computing model, a mix of on-premises and cloud-based IT. The value of a hybrid computing approach is that it gives organizations agility and flexibility: you have the option of insourcing or outsourcing systems whenever there is a business or technology need to do so. By adopting a hybrid strategy, companies can also take advantage of the best strategic, operational and cost options. In some cases, the best choice might be to outsource to the cloud; in other cases, an in-house option might be preferable. Here is an example: a large company with a highly customized ERP system from a well-known vendor acquires a smaller company. Operationally, the desire is to move the newly acquired, smaller company onto the enterprise’s in-house ERP system, but there are so many customized programs and interfaces that the company decides instead to move the new company onto a cloud-based, generic version of the software. The advantage is that the newly acquired company gets acclimated to the features and functions of the ERP system. Going forward, the parent company has the option of either migrating the new company over to the corporate ERP system, without being rushed, or joining the newly acquired company by migrating the enterprise ERP to the cloud.


What is cryptography? How algorithms keep information secret and safe

Secret key cryptography, sometimes also called symmetric key, is widely used to keep data confidential. It can be very useful for keeping a local hard drive private, for instance; since the same user is generally encrypting and decrypting the protected data, sharing the secret key is not an issue. Secret key cryptography can also be used to keep messages transmitted across the internet confidential; however, to successfully make this happen, you need to deploy our next form of cryptography in tandem with it. ... In public key cryptography, sometimes also called asymmetric key, each participant has two keys. One is public, and is sent to anyone the party wishes to communicate with. That's the key used to encrypt messages. But the other key is private, shared with nobody, and it's necessary to decrypt those messages. To use a metaphor: think of the public key as opening a slot on a mailbox just wide enough to drop a letter in. You give those dimensions to anyone who you think might send you a letter. The private key is what you use to open the mailbox so you can get the letters out. The mathematics of how you can use one key to encrypt a message and another to decrypt it are much less intuitive than the way the key to the Caesar cipher works.
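
The contrast is easy to make concrete with the Caesar cipher mentioned above: anyone holding the encryption key immediately has the decryption key (just negate the shift), which is the defining property of symmetric cryptography and the reason key distribution is its weak point. A minimal sketch:

```python
def caesar(text: str, key: int) -> str:
    """Encrypt with a Caesar shift; calling again with -key decrypts.

    Symmetric in the strongest sense: the encryption key trivially
    yields the decryption key. Public-key schemes are designed so
    that this derivation is computationally infeasible.
    """
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)
```

For example, `caesar("Attack", 3)` gives `"Dwwdfn"`, and applying the shift `-3` recovers the plaintext.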


Mitigating Business Risks in Your 5G Deployment

For 5G networks to thrive, the underlying architecture will be distributed in the cloud and will no longer be dependent on dedicated appliances. The corresponding implementation and deployment of the carriers’ networks will evolve to expand capacity, reduce latency, lower costs and reduce necessary power requirements. To reinforce this open environment, organizations using 5G will have to virtualize their network functions, resulting in less control over the physical elements of the networks in exchange for the 5G benefits in infrastructure. Services are also no longer restricted to service providers’ networks and can originate from external network domains. This means that services can rely on virtualized network resources physically closer to the connected device for more efficient delivery. 5G architectures will rely on a software-defined networking/network functions virtualization (SDN/NFV)-supported foundation for their transition to the cloud. This change to the network infrastructure leads to corresponding shifts in the cyberattack threat landscape. 5G will utilize the concept of network slicing to enable service providers to “slice” portions of a spectrum to offer specialized services for specific device types, all the while remaining in the same physical infrastructure.


Microsoft fights botnet after Office 365 malware attack

According to filed court documents, Microsoft sought permission to take over domains and servers belonging to the malicious Russia-based group. It also wanted legal assent to block IP addresses associated with the plot and prevent the entities behind it from purchasing or leasing servers. The requests were part of a grander plan of action to destroy data stored in the hackers' systems. The intention was first to block access to servers controlling over 1 million infected machines. This move would be a crucial step in halting control of over an additional 250 million breached email addresses. Microsoft has said that Trickbot’s strategy was mostly successful because it used a custom third-party Office 365 app. Tricking users into installing it allowed perpetrators to bypass passwords instead of relying on the OAuth2 token. Through this technique, they could access compromised Microsoft 365 user accounts and sensitive data associated with them, such as email content and contact lists. In the court documents, Microsoft laments that Trickbot used authentic-looking Microsoft email addresses and other company information to malign its clients. It argues that the network used its name and infrastructure for malicious purposes, thereby tarnishing its image.


Breaking Serverless on Purpose with Chaos Engineering

“You should stop when something goes wrong, even if you are not running it in production. You should stop just to understand how you are going to roll back when such things happen,” Samdan said. He echoed what Liz Fong-Jones said in her ChaosConf talk: that you should intentionally plan your chaos experiments and let everyone know ahead of time. “You don’t need to surprise other people. You don’t need to surprise other departments. And, most importantly, in production, your customers should know about it,” he said. So if something goes terribly wrong, they aren’t worried, because you talked about it ahead of time and already had a rollback plan, which you also shared with them. Chaos gets way more complicated in serverless environments, which are highly distributed and event-driven. Risks with serverless tend to come from the services you don’t have insight or control over. Essentially, serverless is chaotic at its heart. With serverless you inherit a whole new set of failures, within its many resources ... He says a common fix for serverless issues is to aim for asynchronous communication whenever possible and then properly tune synchronous timeouts. Other serverless fixes include putting circuit breakers in place and using exponential backoff to find an acceptable rate of pacing retransmissions.
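
The two fixes named above, exponential backoff and circuit breakers, can be sketched in a few lines of Python. The thresholds and delays here are illustrative, not recommendations:

```python
import random

def backoff_delays(base=0.5, cap=30.0, attempts=5, jitter=True):
    """Exponential backoff schedule: base * 2^n, capped, with optional jitter."""
    delays = []
    for n in range(attempts):
        delay = min(cap, base * (2 ** n))
        if jitter:
            # "Full jitter" spreads retries out so clients don't stampede in sync.
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays

class CircuitBreaker:
    """Minimal breaker: fail fast after `threshold` consecutive failures."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn, *args):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1  # count the failure, re-raise to the caller
            raise
        self.failures = 0       # any success closes the circuit again
        return result
```

With `jitter=False` the schedule is deterministic, e.g. `backoff_delays(base=1, cap=8, attempts=5, jitter=False)` yields `[1, 2, 4, 8, 8]`.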


Audit .NET/.NET Core Apps with Audit.NET and AWS QLDB

As a request flows through the system, new information is added to the audit event: the component name, the identity or username behind the executing request, the state of the data before it was altered and after modification, timestamps, machine names, a common identifier to correlate the request across components, and any other information needed to reconcile the request with other systems. This operation is vital for some businesses, so it is often considered part of the transaction: the cancellation of a contract is only considered successful if there is also a record in the audit log trail. One could rely on the ILogger interfaces to implement this requirement, but there are a few problems: it can easily be turned off, a failure to send a message to the log won't crash the application, and it does not have specialized primitives for audit logging. ... Audit.NET is an extensible framework for auditing executed operations in .NET and .NET Core. It comes with two types of extensions: the data providers (or data sinks) and the interaction extensions. The data providers are used to store audit events in various persistent storages, and the interaction extensions are used to create specialized audit events based on the executing context, such as Entity Framework, MVC, WCF, HttpClient, and many others.
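
Audit.NET itself is a .NET framework, but the shape of the audit event described above (correlation identifier, machine name, before/after values accumulated along the request) is language-agnostic. A hypothetical sketch, not Audit.NET's actual API:

```python
import datetime
import socket
import uuid

class AuditEvent:
    """Accumulates audit fields as a request flows through components."""

    def __init__(self, operation, user):
        self.data = {
            "operation": operation,
            "user": user,
            # One identifier ties the trail together across components.
            "correlation_id": str(uuid.uuid4()),
            "machine": socket.gethostname(),
            "started_at": datetime.datetime.utcnow().isoformat(),
        }

    def record(self, component, old_value, new_value):
        """Append a before/after snapshot from one component."""
        self.data.setdefault("changes", []).append(
            {"component": component, "old": old_value, "new": new_value}
        )
        return self  # allow chaining as the request moves along
```

In the contract-cancellation example, persisting this event would be part of the transaction itself: the cancellation only counts as successful once the event reaches durable storage.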


WFH has left workers feeling abandoned

One in three employees admitted that being away from the office had lowered their morale, with respondents reporting that they feel distracted during their work day, and easily stressed out at work. What's more: there seems to be consensus that employers have not gone far enough in supporting their workforce. Less than a quarter of employees in the US and Europe received guidance from their employer on working remotely on topics ranging from tips on new ways to work, to data security best practices. But despite the potential difficulties of working from home day-in, day-out, HP's research found that office workers are keeping an eye on the bigger picture – and that overall, respondents seemed positive about the future. The majority of employees surveyed agreed that the new ways of working caused by the crisis would allow them to change their work environments for the better. Over the past few months, workers have been gauging what the future holds for their nine-to-five, and preparing accordingly. The survey shows that many employees have identified continuous learning and upskilling as key to their success, and have lost no time in re-training themselves. From leadership skills to foreign languages through IT and tech support knowledge, almost six in ten respondents said that they were currently learning at least one new skill, often through free online programs.


Twitter hack probe leads to call for cybersecurity rules for social media giants

The report concludes this is a problem U.S. lawmakers need to get on and tackle stat — recommending that an oversight council be established (to “designate systemically important social media companies”) and an “appropriate” regulator appointed to ‘monitor and supervise’ the security practices of mainstream social media platforms. “Social media companies have evolved into an indispensable means of communications: more than half of Americans use social media to get news, and connect with colleagues, family, and friends. This evolution calls for a regulatory regime that reflects social media as critical infrastructure,” the NYSDFS writes, before going on to point out there is still “no dedicated state or federal regulator empowered to ensure adequate cybersecurity practices to prevent fraud, disinformation, and other systemic threats to social media giants”. “The Twitter Hack demonstrates, more than anything, the risk to society when systemically important institutions are left to regulate themselves,” it adds. “Protecting systemically important social media against misuse is crucial for all of us — consumers, voters, government, and industry. The time for government action is now.”


Google, Intel Warn on ‘Zero-Click’ Kernel Bug in Linux-Based IoT Devices

The flaw, which Google calls “BleedingTooth,” can be exploited in a “zero-click” attack via specially crafted input, by a local, unauthenticated attacker. This could potentially allow for escalated privileges on affected devices. “A remote attacker in short distance knowing the victim’s bd [Bluetooth] address can send a malicious l2cap [Logical Link Control and Adaptation Layer Protocol] packet and cause denial of service or possibly arbitrary code execution with kernel privileges,” according to a Google post on Github. “Malicious Bluetooth chips can trigger the vulnerability as well.” The flaw (CVE-2020-12351) ranks 8.3 out of 10 on the CVSS scale, making it high-severity. It specifically stems from a heap-based type confusion in net/bluetooth/l2cap_core.c. A type-confusion vulnerability is a specific bug that can lead to out-of-bounds memory access and can lead to code execution or component crashes that an attacker can exploit. In this case, the issue is that there is insufficient validation of user-supplied input within the BlueZ implementation in Linux kernel. Intel, meanwhile, which has placed “significant investment” in BlueZ, addressed the security issue in a Tuesday advisory, recommending that users update the Linux kernel to version 5.9 or later.


There’s no better time to join the quantum computing revolution

It’s an exciting time to be in quantum information science. Investments are growing across the globe, like the recently announced U.S. Quantum Information Science Research Centers, that bring together the best of the public and private sectors to solve the scientific challenges on the path to a commercial-scale quantum computer. While there’s increased research investment worldwide, there are not yet enough skilled developers, engineers, and researchers to take advantage of this emerging quantum revolution.  Here’s where you come in. There’s no better time to start learning about how you can benefit from quantum computing, and solve currently unsolvable questions in the future. Here are some of the resources available to start your journey. Many developers, researchers, and engineers are intrigued by the idea of quantum computing, but may not have started because perhaps they don’t know how to begin, how to apply it, or how to use it in their current applications. We’ve been listening to the growing global community and worked to make the path forward easier. Take advantage of these free self-paced resources to learn the skills you need to get started with quantum.



Quote for the day:

"Tomorrow's leaders will not lead dictating from the front, nor pushing from the back. They will lead from the centre - from the heart" -- Rasheed Ogunlaru

Daily Tech Digest - October 14, 2020

Financial crime group FIN11 pivots to ransomware and stolen data extortion

Despite casting a wide net with its phishing campaigns, FIN11 chooses to perform deeper compromises on only a small subset of its victims, which are likely selected based on their size, industry and likelihood of paying. Like several other sophisticated ransomware gangs, FIN11 uses manual hacking to move laterally through networks and deploy its ransomware, so the group might not have enough manpower to do this on a large scale. If a victim looks interesting, after the initial intrusion the FIN11 attackers deploy multiple backdoors with the goal of moving laterally and obtaining domain administrator privileges. Even though its exclusive tools like FlawedAmmyy and MIXLABEL are used to gain the initial foothold, the lateral movement activity involves the use of many publicly available tools. This is similar to how an increasing number of hacker groups operate. Once domain admin credentials have been obtained, the attackers use various tools to disable Windows Defender and deploy the CLOP ransomware to hundreds of computers using Group Policy Objects. FIN11's ransom notes include only an email address for victims to contact and do not specify a ransom amount, suggesting the ransom is later customized based on who the victim organization is.


How to ignite a mainframe transformation with three key mindset changes

There’s often a misconception that IT departments have to plan their entire mainframe transformation at the same time, which usually leads to delays and pushback from teams who believe the effort is simply too ambitious, or fear it will take too long to achieve. It’s important to remember that mainframe teams usually have a backlog of essential, customer-impacting work to complete, so it’s difficult to take resources away from those tasks to support an internal transition project. It’s far more effective to break the transformation down into smaller steps, using Agile thinking to enable incremental change, and establish continuous feedback and improvements. Instead of trying to build a complete environment for Agile delivery on the mainframe, it’s better to break the process down into steps, using shorter sprints to manage the transition and mitigate any risk and resource constraints. Start by modernising a single aspect of mainframe delivery, such as improving the developer experience with an integrated development environment (IDE), then add automated testing processes, or application analysis and visualisation in stages, to avoid overwhelming teams with a major transition project all at once. It also helps get more people on board, by allowing them to see the benefits of each step before they take the next one.


Agile resilience in the UK: Lessons from COVID-19 for the ‘next normal’

Alongside establishing a guiding purpose, the most effective organizations focused on more frequent communications, taking an adult-to-adult tone that explained decisions and shared a realistic assessment. During the COVID-19 pandemic at UK Power Networks, for example, the CEO shared daily video messages showing the rationale behind corporate decisions. Feedback from employees demonstrated the positive effect of this clear communication and transparency. For organizations that have found a new focus during the COVID-19 crisis, the next key step should be to consider if they can enhance and develop their common purpose to hold true in more normal times, giving employees the same clarity of decision making and ability to act as during the COVID-19 crisis. Agile organizations often speak of a shared purpose and vision—the “North Star”—which helps people feel personally and emotionally invested in the organization. This North Star allows employees to individually and proactively watch for changes in customer preferences and the external environment, and then, act upon them. ... The second shared practice we found was that organizations created new forums and structures, or repurposed existing ones, to act as rapid-decision-making bodies.


Build Next-Generation Cloud Native Applications with the SMOKE Stack

Enterprise technology needs to help organizations take action in real time. Doing this effectively means modernizing application architecture from batch processing to event-driven. Serverless computing is an event-driven architecture that abstracts infrastructure, so developers can focus on writing the application code. With serverless, application teams don’t need to worry about the complexity of maintaining, patching, supporting and paying for infrastructure that they need on an elastic basis. This makes serverless perfect as the glue to integrate services from anywhere. At TriggerMesh, we think serverless is only the beginning. The real power comes from what serverless enables. Serverless architectures allow even the largest enterprises with years or decades of legacy code to break out of the constraints of their own data centers and a single cloud. Open source, standards and specifications free enterprise developers to mash up services from on-premises and any cloud, to rapidly compose event-driven applications that support high velocity — so that you can bring new features and products to market fast.


Ransomware: It’s time to bring cybersecurity audits up to GDPR status

According to Check Point, the number of daily ransomware attacks worldwide has increased by half over the past three months -- close to doubling in the United States alone -- as threat actors take advantage of the operational disruption and rapid shift to home working caused by COVID-19. Ezat Dayeh, Senior Engineer Manager UK&I at Cohesity, told ZDNet in an interview that the company has seen a recent and "dramatic" increase in the volumes of ransomware incidents. As more people are working from home due to COVID-19, this may have introduced new risk factors -- but the increasing sophistication of such attacks is of concern, too. "When we think about two or three years ago, when people were hit with ransomware, nine out of ten times they would basically say, "it's definitely impacted production, we've got issues, but we can go back to our backups," and worst-case scenario, we will just do a restore," Dayeh said. "But now, with that sophistication, the bad guys know this. Ransomware can come into a network [and] it won't do anything but it will start looking around and see what it can access on the network."  


Facebook’s New Open Source Framework For Training Graph-Based ML Models

The WFST data structure is prevalent in speech recognition, natural language processing, and handwriting recognition applications. In speech recognition systems especially, WFSTs provide a common and natural representation for hidden Markov models (HMMs), context dependency, grammars, and pronunciation dictionaries, with weighted determinization algorithms to optimise time and space requirements. One of the most popular WFST-based products is the Kaldi toolkit for speech recognition, which is used to decode speech. Kaldi relies heavily on OpenFst, an open-source WFST toolkit. To understand the importance of the GTN framework for a WFST graph, consider a general speech recogniser: it consists of an acoustic model that predicts the letters in the speech and a language model that identifies the words that may follow. These models are represented as WFSTs and are trained separately before being combined to output the most likely transcription. It is at this juncture that the GTN library steps in to train the different models, which in turn yields better results. Before GTN, the use of the individual graphs at training time was implicit, and the graph structure had to be hard-coded in the software.
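
To see what "scoring a sequence through a weighted graph" means in practice, here is a toy weighted acceptor and a best-path (Viterbi-style) score in plain Python. This is an illustration of the data structure only, not the actual gtn or OpenFst API; GTN's contribution is making such graph computations differentiable so the models can be trained through them:

```python
import math

def viterbi_score(arcs, start, finals, seq):
    """Best (max log-weight) path through a weighted acceptor for `seq`.

    `arcs` maps (state, label) -> list of (next_state, log_weight).
    Returns -inf if no path accepts the sequence.
    """
    scores = {start: 0.0}          # log-weight of the best path to each state
    for label in seq:
        nxt = {}
        for state, score in scores.items():
            for dest, w in arcs.get((state, label), []):
                cand = score + w   # extend the path along this arc
                if cand > nxt.get(dest, -math.inf):
                    nxt[dest] = cand
        scores = nxt
    return max((scores.get(f, -math.inf) for f in finals), default=-math.inf)
```

A two-arc chain accepting "ab" with probability 0.5 on each arc scores log(0.25) for "ab" and -inf for anything else.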


What will quantum computing mean for business?

There are four main areas that are already a focus of attention. Cybersecurity is the obvious first one, because if quantum computers render existing encryption worthless, they can also be used to produce more secure algorithms, random number generators and keys that can’t be defeated by their own processing prowess. The other areas revolve around the capacity quantum computing has for comparing lots of different possibilities and finding the optimum or best fit among them. For example, in financial services this could provide portfolio optimisation, high-frequency trading advantages, and more efficient fraud detection. Goldman Sachs, RBS and Citigroup are already recruiting towards taking advantage of these possibilities. Logistics is another obvious beneficiary. Traffic management, delivery route optimisation, and other traffic-related problems are finding potential quantum solutions, with Daimler and Honda already aiming to acquire quantum computers for these kinds of activities. Similarly, manufacturing, pharmaceuticals, and materials science can optimise their processes, such as the manufacturing supply chain. Existing quantum computers with just 50 qubits are delivering good results for applications such as protein folding and new drug formula discovery.


Windows “Ping of Death” bug revealed – patch now!

Interestingly, the bug you see triggering in the video above that provokes the BSoD is caused by a buffer overflow. TCPIP.SYS doesn’t correctly check the size of one of the data fields that can optionally appear in IPv6 ICMP packets, so you can shove too much data at it and corrupt the system stack. Bang! Down it goes. Two decades ago, almost any stack-based buffer overflow on Windows could be used not only to crash a system, but also, with a bit of care and planning, to take over the processor’s flow of execution and divert it into a program fragment – known as shellcode – of your own choosing. In other words, Windows stack overflows in networking software almost always used to lead to so-called remote code execution exploits, where attackers could trigger the bug from afar with specially crafted network traffic, run code of their own choosing, and thereby inject malware without you even being aware. But numerous security improvements in Windows, from Windows XP SP3 onwards, have made stack overflows harder and harder to exploit, and these days they can often only be used to force crashes, not to take over completely. Nevertheless, a malcontent on your network who could crash any computers at will, servers and laptops alike, could cause plenty of harm just through what’s known as a denial of service attack, especially because recovering from each crash requires a complete reboot.
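
The underlying bug class, trusting a size field inside a packet, is easy to illustrate outside of C. Here is a toy parser for a hypothetical type-length-value option (the field layout and limit are invented for illustration); the two checks are exactly what was missing in the vulnerable code path:

```python
import struct

MAX_OPT_LEN = 24  # fixed-size destination buffer in this toy example

def parse_option(packet: bytes) -> bytes:
    """Parse a toy TLV option: 1-byte type, 1-byte declared length, payload.

    Never trust a declared length without comparing it against both the
    bytes actually present and the buffer you intend to copy into.
    """
    if len(packet) < 2:
        raise ValueError("truncated header")
    opt_type, declared_len = struct.unpack_from("!BB", packet)
    payload = packet[2:]
    # Check 1: the declared length must not exceed the bytes present.
    if declared_len > len(payload):
        raise ValueError("declared length exceeds packet size")
    # Check 2: it must also fit the fixed-size destination buffer,
    # which is the check whose absence lets an attacker smash the stack.
    if declared_len > MAX_OPT_LEN:
        raise ValueError("declared length exceeds destination buffer")
    return payload[:declared_len]
```

In a memory-safe language an oversized field raises an exception; in kernel C, the equivalent missing check means the copy runs past the buffer and corrupts the stack.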


The CISO’s newest responsibility: Building trust

As part of this evolution, CISOs have had to build confidence among all stakeholders—customers, partners, employees, board members and other executives—that they and their security teams have the organization’s best interests in mind when it comes to cybersecurity decisions. ... “Things are all upside down now. No one is working the same, and there’s a lot of discomfort out there. So as a security person you have to build that trust. It’s part of your job, and it’s what you get paid to do,” says Gene Fredriksen, a veteran security executive now serving as executive director of the National Credit Union Information Sharing & Analysis Organization (NCU-ISAO) and cybersecurity principal for Pure IT Credit Union Services. ... The CISO’s capacity to cultivate trust is more than an esoteric discussion or business-school exercise: Experts say it’s an essential element for any CISO who wants to be successful in the role because it enables him or her to enact the policies, procedures and technologies needed to secure the organization and, thus, prove to others—including customers—that their interactions with the company are safe.


Data Analytics Without a Plan is Like Panning for Gold

Of the many lessons COVID-19 has to teach, data analysis is one of the least appreciated. A lack of quality data has led to unanswerable questions about the availability of ventilators, hospital beds, and personal protective equipment. Poor data collection has hindered contact tracing efforts. In a pandemic, collecting the right data and applying it in the right way can save lives. A hospital in Boston was lauded for using a forecasting model to anticipate how many bags of blood it would need. Singapore, one of the countries with the slowest spread of COVID-19, uses blockchain and analytics to reduce exposures through contact tracing. Many of the economy’s heavy hitters, like Amazon and Facebook, were designed from the outset to apply data. If a shopper looks repeatedly at an item on Amazon, the site will show similar items, adjust the price, or offer promotions to prod a purchase. Facebook’s Cambridge Analytica scandal demonstrates what can happen when data is applied indiscriminately. People felt violated by the depth of information the company was able to glean from their internet use. 



Quote for the day:

"Leaders lead when they take positions, when they connect with their tribes, and when they help the tribe connect to itself." -- Seth Godin

Daily Tech Digest - October 13, 2020

MLOps: More Than Automation

For MLOps to learn from DevOps, we must center the needs of data scientists and the people that are impacted by their models first. It isn’t enough to say that practicing MLOps means advocating for automation and monitoring at all steps to do things faster. Without this focus, we will see an increase in the deployment of models that have uninspected and unintended consequences that often disproportionately impact marginalized communities. So, as a data scientist, what is it that I need? Keeping up with the latest and greatest event streaming services, distributed systems or methods of continuous integration and deployment isn’t where my mind lights up. I would like to spend most of my time understanding the domain space of the model I’m about to build, the nuanced impact of that model and whether it’s going to meet the needs of my customers and the people they serve. There are a few ways to notice if you’re applying MLOps merely as a Band-Aid, a way to just go faster, an approach that will ultimately break down. When looking for a solution to automate, consider if you’re only reducing the work required for manual processes or if you’re also enabling data scientists to focus on the hard problems they’re trained to tackle.


6 Signs DevSecOps Maturity Has a Long Way to Go

Nevertheless, AppSec teams still struggle on many fronts to bake security into the process of delivering software, and the vast majority of organizations are early in their DevSecOps journey. According to another recent study conducted by WhiteSource, only 20% of organizations believe they’ve reached full DevSecOps maturity. And 73% of respondents say they feel forced to compromise on security to meet short development lifecycles. That compromise is fine in a lot of situations, because what is risk management but a constant exercise in compromise? It’s all about weighing the risks against the benefits of a certain activity, and coming up with a balance in action and controls that minimizes the risk while maximizing the benefits. The problem for DevSecOps today is that the indicators show there’s still little rigor or due diligence to come up with a disciplined method for determining that balance, let alone executing on it. ... The disconnect on what DevOps pros prioritize over time—security work versus innovation and feature delivery—ultimately comes down to how they’re measured and incentivized by their bosses. Many executive teams may pay lip service to the need for better cooperation between security and development, according to 44% of security pros interviewed in the Ponemon study.


Half of all virtual appliances have outdated software and serious vulnerabilities

"Poor processes account for the product age problem in many cases," Orca said in its report. "Out-of-date products remain available after they’ve reached their end-of-life. The overall product is no longer supported, the operating systems may be unsupported, and/or updates and patches are no longer being applied. As a result of Orca Security’s research, 39 products have been removed from distribution." Commercial appliances scored about the same on average as free and open-source ones, with the latter having a slight advantage. However, hardened virtual appliances whose operating systems and software stacks had been stripped down to minimize attack surface, scored much higher than all other appliances -- 94.2 on average. Over half of tested appliances came from system integrators. These images have all the necessary components to run certain Web applications -- for example an image with WordPress, but also the Apache Web server and MySQL database and the OpenSSL security library. Their average score was 77.6, which is close to the overall average score for all appliances, but lower than those from security vendors.


CPRA: More opportunity than threat for employers

The CPRA is actually a lot more lenient than the GDPR in regard to how it polices the relationship between employers and employees’ data. Unlike its EU equivalent, the proposed Californian law already contains many exceptions acknowledging that worker-employer relations are not like consumer-vendor relations. Moreover, the CPRA extends the CCPA exemption for employers, set to end on January 1, 2021. This means that if the CPRA passes into law, employers would be released from both their existing and potential new employee data protection obligations for two more years, until January 1, 2023. This exemption would apply to most provisions under the CPRA, including the personal information collected from individuals acting as job applicants, staff members, employees, contractors, officers, directors, and owners. However, employers would still need to provide notice of data collection and maintain safeguards for personal information. It’s highly likely that during this two-year window, additional reforms would be passed that might further ease employer-employee data privacy requirements. While the CPRA won’t change much overnight, impacted organizations shouldn’t wait to take action, but should take this time to consider what employee data they collect, why they do so, and how they store this information.


Digital transformation: 3 hard truths

Digital transformation projects that are born as “IT initiatives” run the risk of being viewed as changes for the sake of new technology. Digital transformations must be viewed as business transformations, with business leaders not only buying into the proposed plans and value but driving the organizational and process changes that are needed to be successful. The widespread adoption of technologies means an organization doesn’t gain a competitive edge simply by using them; the edge comes from how it uses them. Success lies in creating balanced IT-business partnerships that provide experts from both technical and business domains so new technologies can be integrated deep into the business. Intel’s AI projects are a perfect example of this in practice. Together, IT and the business were able to achieve over $500 million in business value in 2019. Digital transformation isn’t a “from->to” process that reaches a static, determined “end state.” Today’s competitive pressures and the pace of technological change are simply too great to allow for a transformation to ever be “finished.” We need to view digital transformation as always evolving, always underway – with leaders and businesses embracing a dynamic state of constant disruption.


Ransomware operators now outsource network access exploits to speed up attacks

"Since the start of 2020 and the emergence of the now-popular "ransomware with data theft and extortion" tactics, ransomware gangs have successfully utilized dark web platforms to outsource complicated aspects of a network compromise," the researchers say. "A successful ransomware attack hinges on the development and maintenance of stable network access which comes with a higher risk of detection and requires time and effort. Access sellers fill this niche market for ransomware groups." As of September this year, Accenture has tracked over 25 persistent network access sellers -- alongside the occasional one-off -- and more are entering the market on a "weekly basis." Many of the sellers are active on the same underground forums haunted by ransomware groups including Maze, NetWalker, Sodinokibi, Lockbit, and Avaddon. Sellers have now begun touting their offerings on single forum threads, rather than separate posts, and RDP remains a popular option for network access. In an interesting twist, rather than sell-off a zero-day vulnerability to one seller, some traders are using these unpatched bugs to exploit numerous corporate networks and sell access to threat actors in separate bundles to generate additional revenue.


What 5G brings to IoT today and tomorrow

IoT devices today are mostly connected via cabled technologies, Engarto says. These include both shielded twisted-pair LAN and coaxial cables. “In some limited areas Wi-Fi may have some usage,” but is not always ideal, she says. “5G enables many more sensors to be put in place without a need for cable and conduit for each cable,” Engarto says. But the newer wireless technology “will be one of many networking solutions designed to address IoT’s full needs,” says Patrick Filkins, senior research analyst, IoT and mobile network infrastructure, at research firm International Data Corp. (IDC). “For example, 5G can address endpoints that require any breadth of latency, reliability, and security,” Filkins says. “While 5G will be a Swiss-army knife solution to IoT, all from a single platform, some enterprises may not need the full breadth of 5G’s capabilities. In many cases, such as LPWAN [low-power WAN], you can achieve connectivity through alternatives such as LoRaWAN.” Wi-Fi 6 and Wi-Fi HaLow will also play a role in dense, shorter-range IoT use cases, Filkins says, although with a potential loss in reliability. “5G is an uplift from LTE when it comes to promising zero downtime communications, by baking in new technologies enabling near-zero packet loss,” Filkins says.


Why India’s Proposed Data Protection Authority Needs Constitutional Entrenchment

The DPA has been entrusted the role of a fourth branch institution, primarily due to its overarching role in protecting the fundamental right to privacy of citizens against not only possible transgressions of such privacy by the private sector but also possibly by the government itself. As opposed to a sectoral regulator, it is a sector-agnostic body and has wide powers cutting across sectors and economic spheres. It is empowered to penalise both Central and state governments when they fail to protect an individual’s personal data. In fact, it is also empowered to monitor sensitive data processed by other fourth branch watchdogs such as the CAG and the EC and, even more significantly, the Legislature and Judiciary itself. As such, the DPA carries out crucial fourth branch oversight and accountability functions against almost all institutions of governance in our system. Why does the DPA, in its current form, lack the independence needed to be a strong fourth branch institution and ward off attempts of political interference? This is primarily attributable to the fact that its structure and composition were inspired by sectoral regulators such as SEBI, IRDA and TRAI, based on the recommendation by the Financial Sector Legislative Reforms Commission as mentioned in the Justice B.N. Srikrishna committee report.


Automation and AI: Challenges and Opportunities

Today, it is widely acknowledged that automation and AI technologies will gradually transform the global workplace, with intelligent machines performing human tasks in some cases and aiding humans in others. The presence of robotic machines in the workplace will ultimately increase efficiency and reduce costs. As a result, many human occupations will disappear, while others will adapt to technology-enabled roles. ... Although businesses have shown a recent trend of hiring AI developers at a breakneck speed to fulfill their in-house automation needs, few understand the fundamental challenges that this technology brings with it. As a result, the “AI comfort zone” is still missing in enterprise business circles, and business operators are still doubtful about the cost benefits associated with AI. Everywhere you look today, you come across automated machines or systems driven by powerful computers, multi-channel data, and very smart algorithms. Modern society is grappling with chatbots, PDAs, self-driving vehicles on roads, and automated check-outs in grocery stores. ... Although Data Governance is still a concern among most business operators, it is widely accepted that augmented intelligence has the capability of emulating the human decision-making process.


Microsoft India Announces Public Preview of Power Automate Desktop Solution

Power Automate Desktop is a part of the Microsoft Power Automate service and is claimed to enable coders and non-coders alike to automate processes and tasks across desktop and web applications with minimal effort from a single intelligent platform. According to sources, the design environment allows non-coders to automate processes quickly without writing a single line of code. It also provides complete control and flexibility for advanced users, programmers and developers in a scalable and secure environment. It further democratises the RPA capabilities within Power Automate by providing a desktop automation option for citizen developers and business users. Irina Ghose, Executive Director of Cloud Solutions, Microsoft India, stated, “Organisations and IT departments are seeking ways to quickly adapt to the unprecedented pace of change across every industry around the world. With Microsoft Power Automate Desktop, we aim to empower organisations to automate tasks across the desktop and web, using an integrated platform to complete tasks at speed and scale.”



Quote for the day:

"You get in life what you have the courage to ask for." -- Nancy D. Solomon