Daily Tech Digest - December 24, 2020

Ethical AI isn’t the same as trustworthy AI, and that matters

Certainly, unethical systems create mistrust. It does not follow, however, that an ethical system will be categorically trusted. To further complicate things, not trusting a system doesn’t mean it won’t get used. The capabilities that underpin AI solutions – machine learning, deep learning, computer vision, and natural language processing – are not ethical or unethical, trustworthy or untrustworthy. It is the context in which they are applied that matters. ... The scale at which an AI pundit can be deployed to spread disinformation or simply influence the opinions of human readers who may not realize the content’s origin makes this both unethical and unworthy of trust. This is true even if (and this is a big if) the AI pundit manages not to fall prey to and adopt the racist, sexist, and other untoward perspectives rife in social media today. ... Ultimately, ethics can determine whether a given AI solution sees the light of day. Trust will determine its adoption and realized value. All that said, people are strangely willing to trust with relatively little incentive. This is true even when the risks are higher than a gelatinous watermelon cookie. But regardless of the stakes, trust, once lost, is hard to regain.


As technology develops in education so does the need for cybersecurity

One of the most effective ways to boost cybersecurity in education is by adopting a proactive mentality, rather than a reactive one. Schools cannot afford to wait until an attack happens to put processes in place to defend themselves. Instead, they need to create a “cyber curriculum” that informs everyone – IT teams, teachers, and students alike – about staying secure online. This curriculum should include documentation that people can refer to at any time, guiding them on the risks and warning signs of cyber attacks, as well as best practices for smart online use. Likewise, the curriculum should include on-demand training courses, current cybersecurity news and trends, and the contact information for the people who are responsible for taking action if the network is compromised. At the same time, IT admins need to conduct regular penetration tests and appoint a “red team” to expose possible vulnerabilities. This team should test the school’s system under realistic conditions and without warning, so as to identify weaknesses that may not be immediately obvious.


After early boom, fintech lending startups face a reality check

Industry experts pointed out that from here on, the lending startups will exercise abundant caution. There are a couple of points playing out in the industry; first, there is availability of liquidity in the system; secondly, there is demand since consumers need credit to restart their lives. The repayment stress will continue well into 2021. Also, larger, well-capitalised players might show a higher risk appetite and grab market share next year, leading to some loss in business for fintechs, who might want to conserve capital and recover existing loans. In a report titled ‘NBFC Sector in India: A brief update post Covid’, consultancy firm Alvarez & Marsal pointed out that 10-15 percent of the customers who opted for a moratorium could see defaults, thereby pushing up overall NPA numbers by 300-400 basis points. Around 50 percent of those who took the moratorium could opt for restructuring of their loans, and lenders could see a spike in their credit costs, too, the report added. There is already a spurt in demand for gold loans, which are a secured form of personal loan. Bankers in the know pointed out that they are more comfortable giving out secured loans in the aftermath of the pandemic, given that consumers across the board could be in tough financial situations.
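As a rough illustration of what a 300-400 basis point rise means in absolute terms, the sketch below converts basis points into rupee amounts (the book size and starting NPA ratio are hypothetical, not figures from the report):

```python
def npa_after_stress(book_value, current_npa_ratio, bps_increase):
    """Return (new NPA ratio, incremental NPA amount) when gross NPAs
    rise by `bps_increase` basis points; 1 basis point = 0.01%."""
    delta = bps_increase / 10_000          # convert bps to a fraction
    return current_npa_ratio + delta, book_value * delta

# Hypothetical lender: a 1,000-crore book at 5% NPAs, stressed by 350 bps
new_ratio, extra_npa = npa_after_stress(1000.0, 0.05, 350)
# new_ratio -> 0.085 (8.5%), extra_npa -> 35.0 crore of newly stressed loans
```

Even at the low end of the report's range, the jump is material relative to a typical single-digit NPA base.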


Infer# Brings Facebook's Infer Static Analyzer to C# and .NET

Infer is a static analysis tool open-sourced by Facebook in 2015. It supports Java and C/C++/Objective-C code and is able to detect a number of potential issues, including null pointer exceptions, resource leaks, annotation reachability, missing lock guards, and concurrency race conditions in Android and Java code; and null pointer dereferences, memory leaks, coding convention violations, and unavailable APIs for languages belonging to the C family. Infer# is not the only static analyzer available for .NET, says Microsoft senior software engineer Xin Shi. However, Infer# brings unique capabilities to the .NET platform. What sets Infer# apart is its focus on cross-function analysis, which is not found in other analyzers, and incremental analysis. PREfast detects some instances of null dereference exceptions and memory leaks, but its analysis is purely intra-procedural. Meanwhile, JetBrains ReSharper relies heavily on developer annotations for its memory safety validation. ... The advantages of working from a low-level representation of the source code are twofold: first, the CIL underlies all .NET languages, and therefore InferSharp supports all .NET languages this way.


DevSecOps Can Address the Challenges of Governance, Risk, Compliance (GRC)

DevOps originated within IT to meet similar performance and innovation goals. While security and compliance have always been a part of DevOps, the term DevSecOps is often used to ensure security is explicitly emphasized. Seeing DevSecOps as part of a broader GRC framework makes clear how DevSecOps serves the needs of organizations to innovate faster, maintain complete visibility and control, and effectively manage risk. GRC and DevSecOps use different tools, require different skills, follow different processes, and are emphasized by different teams. But their goals are aligned, and it’s important for both teams to appreciate this so they can collaborate effectively. DevOps specialists are often narrowly focused on process automation or improving handoffs within IT. It’s important for IT teams to appreciate their work in the broader context of serving the company’s GRC initiatives. By contrast, GRC-focused consultants and leaders need to understand DevSecOps as a complementary approach that they should encourage, not inhibit. The IT industry evolves faster than most departments in the company, so compliance officers should defer to IT teams on the most efficient methods to meet requirements. Their main role should be to emphasize the goals and requirements of GRC, and to invite creative solutions from IT. 


Best Practices for Building Offline Apps

On the flip side, some features are non-negotiable: you simply have to be connected to the internet to use them, such as location services. That’s OK! Identify which features need an internet connection, then create messages or alerts that tell users what those features require. Users will appreciate it because you’ll take the guesswork out of understanding your app’s limits when offline. Conflicting information can be a big problem. It may not be enough to simply merge changes or override everything since the last time a user was online. Users should have a clear understanding of what happens when conflict is unavoidable. Once they know their options, they can choose what works for them. There are several solutions for handling conflicting information. It is up to you whether you want the user to pick and choose which version to keep or to institute a “last write wins” rule. No matter what you choose, the app should handle this gracefully. Update too frequently and you may miss out on the benefits of being offline first. Update too slowly and you may create more conflicts than necessary and frustrate users. Knowing what to update, along with when to update, is an important consideration as well.
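Both conflict-handling policies mentioned above fit in a few lines. This is an illustrative Python sketch (the `Revision` type and its timestamp field are assumptions for the example, not any particular framework's API):

```python
from dataclasses import dataclass

@dataclass
class Revision:
    value: str
    modified_at: float  # epoch seconds recorded when the edit was made

def last_write_wins(local: Revision, remote: Revision) -> Revision:
    """Resolve a conflict by keeping the most recent edit; ties go to remote."""
    return local if local.modified_at > remote.modified_at else remote

def offer_choice(local: Revision, remote: Revision) -> dict:
    """The alternative policy: surface both versions and let the user pick."""
    return {"keep_local": local, "keep_remote": remote}
```

Either way, the point is that the resolution is deterministic and visible to the user, rather than silently dropping whichever edit synced last.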


Data anonymization best practices protect sensitive data

Before developing policies and procedures, buying products or implementing manual data anonymization processes, identify all potential PII data elements in your organization. The larger your environment, the more susceptible your organization will be to storing unidentified PII data. This isn't an easy task. Most data, including PII, doesn't sit idle. Once a data element is created, it quickly spreads to reports, dashboards and other data stores across an organization. Ensuring a person's continuous anonymity throughout an enterprise is an inherently fluid task. In other words: things change. Data audits and continuous feedback from IT and business personnel who interact with PII will help to identify potential issues. From products that specifically focus on data anonymization best practices to enterprise-wide offerings that provide a wide range of data security features, there is a wealth of software solutions available. The larger an organization is, the more important these tools become. Based on the amount of data your organization stores, you may need to purchase a product or two to properly identify and safeguard sensitive PII data assets. There is a broad spectrum of data anonymization products available. In addition, some existing data storage platforms inherently provide anonymization features.
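Two techniques these products commonly apply are masking and keyed pseudonymization. The sketch below shows the idea using only the standard library; the key and helper names are illustrative, not any vendor's API:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # illustrative only; keep a real key in a secrets manager

def pseudonymize(value: str) -> str:
    """Replace a PII value with a stable, non-reversible token so records
    can still be joined across reports without exposing the raw value."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Keep just enough of the address for support staff to recognize it."""
    local, _, domain = email.partition("@")
    return local[:1] + "***@" + domain
```

Because the token is stable, the same person maps to the same token in every downstream report, which is exactly why the spread of a data element across dashboards matters: anonymize at the source, before it propagates.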


Quarterbacking Vulnerability Remediation

As the quarterback, security teams identify the nature of the vulnerability, the business assets most at risk, the potential impact on the enterprise, and the patch, configuration change, or workaround that will resolve the issue. Armed with this knowledge, they pull in the right players from other IT functions, align on the necessary fix, and coordinate the remediation campaign, efficiently and effectively. When security and IT teams align on a remediation strategy, the shared context and agreement on execution provide the foundation needed to remediate vulnerabilities at scale. Even if the fix goes wrong, problems get resolved faster when the lines of communication are open. Fixing complex vulnerabilities often requires multiple coordinated elements. The BootHole vulnerability is an excellent example of this: BootHole's sheer pervasiveness makes it incredibly difficult to patch in enterprise settings. It's a cross-platform vulnerability that requires both hardware and software fixes — including firmware and OS updates — that must be performed in precise order. Security, DevOps, and IT teams must work together to minimize its business impact and avoid compromise. As the quarterback, the security team needs to think and act like a team captain: What's the best approach?


CIOs are facing a mental health epidemic

A degree of stress for a CIO is expected and unavoidable in any change project. However, businesses are currently failing to manage this pressure effectively. Recent independent research conducted on behalf of Firstsource found that 55% of business leaders wished they had managed the emotional marathon of change projects better. And CIOs identified the three biggest causes of stress as: 1. Not having the right mix of skills in the team; 2. Pushing too hard and harshly without taking time to celebrate wins; and 3. Resistance from key stakeholders in other divisions and countries. To support CIOs’ digital transformations, the researchers spoke with 120 business leaders to understand how to turn challenges into catalysts for success. This resulted in the emergence of a framework with five areas that are key to managing stress and ensuring a transformation project’s success. Proactively addressing these five areas will help CIOs deliver projects that unlock the full potential of their businesses while managing the stress levels of teams. Managing the business case optimism: CIOs will always aim to keep transformation projects realistic and grounded. However, business cases are never static.


5 things CIOs want from app developers

It’s very easy for development teams to get excited chasing innovations or adding spikes around new technologies to the backlog. CIOs and IT leaders want innovation, but they are also concerned when development teams don’t address technical debt. A healthy agile backlog should show agile teams completing a balance of spikes, technical debt, new functionality, and operational improvements. Prioritization on agile teams should fall to the product owner. But IT leaders can establish principles if product owners fail to prioritize technical debt or if they force feature priorities without considering the agile team’s recommended innovation spikes. CIOs and IT leaders are also realistic and know that new implementations likely come with additional technical debt. They understand that sometimes developers must cut corners to meet deadlines or identify more efficient coding options during the implementation. It’s reasonable to expect that newly created technical debt is codified in the source code and the agile backlogs, and that teams seek to address it based on priorities. Development teams are under significant pressure to deliver features and capabilities to end users quickly. That is certainly one reason teams create new technical debt, but it’s a poor excuse for developing code that is not maintainable or that bypasses security standards.



Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne

Daily Tech Digest - December 23, 2020

How we protect our users against the Sunburst backdoor

SolarWinds, a well-known IT managed services provider, has recently become a victim of a cyberattack. Their product Orion Platform, a solution for monitoring and managing their customers’ IT infrastructure, was compromised by threat actors. This resulted in the deployment of a custom Sunburst backdoor on the networks of more than 18,000 SolarWinds customers, with many large corporations and government entities among the victims. According to our Threat Intelligence data, the victims of this sophisticated supply-chain attack were located all around the globe: the Americas, Europe, the Middle East, Africa and Asia. After the initial compromise, the attackers appear to have chosen the most valuable targets among their victims. The companies that appeared to be of special interest to the malicious actors may have been subjected to deployment of additional persistent malware. Overall, the evidence available to date suggests that the SolarWinds supply-chain attack was designed in a professional manner. The perpetrators behind the attack made it a priority to stay undetected for as long as possible: after the installation, the Sunburst malware lies dormant for an extended period of time, keeping a low profile and thwarting automated sandbox-type analysis and detection.


Why I've Been Merging Microservices Back Into The Monolith At InVision

One of the arguments in favor of creating independent services is the idea that those services can then "scale independently". Meaning, you can be more targeted in how you provision servers and databases to meet service demands. So, rather than creating massive services to scale only a portion of the functionality, you can leave some services small while independently scaling up other services. Of all the reasons as to why independent services are a "Good Thing", this one gets used very often but is, in my (very limited) opinion, usually irrelevant. Unless a piece of functionality is CPU bound or IO bound or Memory bound, independent scalability is probably not the "ility" you have to worry about. ... If I could go back and redo our early microservice attempts, I would 100% start by focusing on all the "CPU bound" functionality first: image processing and resizing, thumbnail generation, PDF exporting, PDF importing, file versioning with rdiff, ZIP archive generation. I would have broken teams out along those boundaries, and have them create "pure" services that dealt with nothing but Inputs and Outputs (ie, no "integration databases", no "shared file systems") such that every other service could consume them while maintaining loose coupling.
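A "pure" service in this sense is just inputs to outputs with no shared state. As a minimal illustration (standard-library Python, not InVision's actual code), ZIP archive generation from the list above fits the pattern neatly:

```python
import io
import zipfile

def build_zip(files: dict) -> bytes:
    """Pure service contract: a mapping of file names to bytes goes in,
    archive bytes come out. No integration database, no shared file
    system, so any other service can consume it while staying loosely
    coupled."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as archive:
        for name, data in files.items():
            archive.writestr(name, data)
    return buf.getvalue()
```

Because the function touches nothing outside its arguments, it can be scaled, relocated, or replaced without coordinating with any other service, which is exactly the property that makes CPU-bound work a good first candidate for extraction.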


CIOs see cloud computing as the bedrock of digital transformation

The CIOs also shared their challenges and experiences during the pandemic. Responding to the big business focus area that tech and cloud will enable or drive in 2021, Chatterjee shared, “For us, it’s very clear the data and analytics piece, and all the modeling that we are doing around fraud, retention propensity, the entire claims experience, I think, across the value chain, anything that is data and insights. And I will be careful in using the term ‘analytics’, because in a lot of areas we use analysis and we incorrectly call it analytics, but the idea is, cloud will enable the entire data and insights as a capability within the organisation. This is something big for us and will be driven by the cloud.” For SonyLiv, the focus is on harnessing the use of data. “We as an organisation, are digital and are using data in each and every decision that we make, whether it is on the infrastructure side, content programming, content production, churn analysis, retention – everywhere. I think it is all about data and democratisation of the data. We are working big time on introducing some of the prediction models, machine learning models, which can help us to retain users. So, I think data is going to play a critical role. The other area which I feel we as a business, is on the OTT side.


Why Boring Tech is Best to Avoid a Microservices Mess

You need to go back to the fundamentals. One way to look at it is understanding that microservices are distributed systems, something many people will have experience with. They should also be familiar with what another panelist, Oswaldo Hernandez Perez, engineering manager at Improbable, called the first law of distribution: “If you don’t need to do it, don’t do it.” So that means focusing on why you are building what you are building. What are you trying to achieve? This is a fundamental question that’s applicable to businesses of all sizes. What problem are you trying to solve, and how will your solution remove friction from its users’ lives? That’s what people care about. Even if you’re developing a niche app for a highly technical audience, they are unlikely to care too much about how it got to them, only that it did and it is fixing a problem for them. If the only way to achieve that is with microservices, then yes, you should definitely use them. If there is an alternative, then consider using that. Do not simply start breaking everything up into microservices just because that is what everyone is currently talking about. Ultimately, microservices are an architectural pattern for reducing complexity. They do this, but they also add complexity elsewhere. If used in isolation, you’ll fix your complexity in one dimension and have it proliferate elsewhere.


The power of value 4.0 for industrial internet of things

Technological discussions are essential to provide a solution to a defined improvement area or challenge, but they are meaningful only after there has been a clearly defined use case with concrete and measurable value identified and captured within financial reporting systems. This means that each effort should start with an integrated value design, rather than technology. It needs to be integrated in the sense that the designed target value can be directly linked to an outcome — for example, process improvements enabled by the digital solution that generated a measurable value impact. Value and solution design need to be one integrated effort. In consequence, this also implies that use cases need to be defined bottom-up, by the operators and resources that operate production and thus realize value add, rather than top-down. Within industrial settings, implementing Industry 4.0 technologies takes more time and effort, compared to applications in the consumer space, for a variety of reasons. Any industrial customer today depends on existing brownfield installations to run and operate their business — these are mostly highly complex and tailored to the targeted product. Managing this complexity manually would be a Sisyphean struggle. When industrial companies are integrating digital manufacturing and supply chain solutions with their customers, they need to continually adapt the solution stack to customer requirements.


5 Robotic Process Automation (RPA) trends to watch in 2021

Expect a sharper focus on understanding and optimizing processes as a direct result of the shift from RPA adoption to evaluation and optimization. Plenty of organizations will realize their initial efforts were stymied by processes that they didn’t fully understand or that simply weren’t good fits for RPA. Day predicts that process-focused technologies and practices – such as process mining – will gain a greater share of attention in the new year. Related terms and technologies such as process discovery, process intelligence, process optimization, and process orchestration will similarly become a bigger part of the RPA vocabulary and toolkit. And as we wrote about recently, we could see a closer relationship between business process management (BPM) and RPA going forward. “Most companies are jumping straight into RPA or trying to automate processes without first adopting process mining, which leads to more strategic deployment of RPA and a more efficient automation framework overall,” Day says. “By more closely associating RPA with process mining and process management, RPA will stand a better chance for success – and organizations will not adopt automation for automation’s sake, and instead focus on ROI and higher success rates.”


Enterprise IT Leaders Face Two Paths to AI

"There will be two pathways for companies to get AI software," said Andrew Bartels, VP and principal analyst serving CIO professionals at Forrester Research. The first movers will continue to build their own for speed to market and differentiation. It's a more expensive path, but some organizations will find value in pursuing it. Meanwhile, other organizations in the future will take another pathway. "The second pathway will be to wait for existing vendors to add the relevant functionality into existing products," Bartels said. "We think over time that will be the more dominant pathway." ... Bartels offers a simple model for assessing the maturity of your vendor's AI and whether it is the right fit for the task you have. He uses the metaphor of K-12 grade school students. If a vendor says they are adding AI functionality to their roadmap, that is a pre-kindergarten level. If they are actually developing the technology, they are in kindergarten. If they have it in beta with clients, they are a third grader. If they have been in production with multiple clients for a few years, they are an eighth grader. The scale continues along the same lines with more advanced work. Bartels said enterprise IT leaders need to ask themselves: "Is this a task that an eighth grader could do? Then trust an AI engine to do it. Or, is this a task we would not give to a human who did not have an equivalent of an 11th grade education?"
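Bartels' grade-school metaphor amounts to a simple ordinal scale. One way a team might encode it when scoring vendors is sketched below (the stage names are paraphrased from the excerpt, not Forrester's formal taxonomy):

```python
# Ordered from least to most mature, following the K-12 metaphor
MATURITY = ["pre-K", "kindergarten", "3rd grade", "8th grade"]

VENDOR_STAGE = {
    "on the roadmap": "pre-K",
    "in development": "kindergarten",
    "in beta with clients": "3rd grade",
    "in production with multiple clients for years": "8th grade",
}

def mature_enough(stage: str, required_grade: str) -> bool:
    """True if the vendor's AI has reached the grade the task demands."""
    return MATURITY.index(VENDOR_STAGE[stage]) >= MATURITY.index(required_grade)
```

The useful part of the model is the comparison, not the labels: it forces the buyer to state how mature the AI must be before the task can be handed over.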


Responsible Innovation Starts With Cybersecurity

Recent events heightened the need to instill the cybersecurity culture and mindset into businesses and local governments. When employees understand how to monitor, spot, and recover from threats, systems can become more resilient. Murphy says, "When I talk with folks about cybersecurity, I tell them the most important thing is to educate your employees or citizens on how they should behave and the things to watch out for. The second component is to know what's going on in your world and then be able to respond or recover your environment. Educating your employees while using AI is key." At this point, even if companies and local governments have limited resources, there are still strategies they can take to secure their environment. Murphy says, "If you don't have the money, at least do things like segment your network. There are certain design criteria that you can build for a safer environment. There are best practices that don't require capital investment. At a minimum, there's good change control and configuration management practices that must be implemented." Without a foundation of cybersecurity, innovation cannot progress.


What AI investments should businesses prioritise for Covid-19 recovery?

While there will be a whole host of systems that businesses and individuals will interact with in the future, they must be intelligent, they need to involve us, and they need to sense and be able to take decisions, some on their own. What this means for businesses is that while the digital presence of systems and processes will only increase, increasing their intelligence and continually enhancing them will be crucial. Therefore, we can expect the role of AI to be far more strategic than ever before, particularly as we think about emotional intelligence in the future. The beauty of this change will be greater demand for people and skills. While AI will start making systems intelligent and reduce demand on maintenance and smaller operations, the next innovations, the roadmap development, the enhancements, and emotional intelligence will require more manpower. Up to now, AI investment in industry has been aimed at solving specific business challenges and driving cost reduction; now businesses really need to invest in creating an enterprise-grade AI stack to responsibly scale AI across the enterprise. Ultimately, organisations need to focus on improving the end user and customer experience, using AI to drive hyper-personalisation such as conversational commerce tools.


Enterprise IoT Security Is a Supply Chain Problem

IoT devices and systems represent additional enterprise attack surface — the same as allowing users to "bring your own device" for mobile devices. These devices expose the organization to the same types of risk as other devices deployed on the corporate network. Security flaws in IoT devices can lead to device takeover and the exposure of sensitive data, and they provide attackers a foothold in the corporate network that can be used to launch additional attacks. Additionally, these IoT systems tend to traffic in a lot of sensitive data, including confidential and proprietary information, and information that has privacy implications. This data will leave the corporate firewall and be processed by services hosted by the IoT system provider, placing the burden on the enterprise to understand how these IoT systems affect their risk posture. Third-party risk must be approached in a structured manner as part of an overall vendor risk management program. New IoT systems that are going to be deployed on enterprise networks and process sensitive enterprise information need to be run through a vetting process, so the organization understands the change in risk exposure. This process can share many of the same characteristics of a standard vendor risk management program but may need to be augmented to address some of the specific concerns that IoT systems raise.



Quote for the day:

"Appreciation is a wonderful thing: It makes what is excellent in others belong to us as well." -- Voltaire

Daily Tech Digest - December 22, 2020

Up Your DevOps Game: It’s Time for NoOps

It’s time for the next approach: Limit the number of choices to create standard best-in-class operations that deliver economies of scale and easily evolve with minimal hassle. NoOps simplifies cloud operations—everyone can do things the same way. NoOps aims to “completely automate the deployment, monitoring and management of applications and the infrastructure on which they run,” according to Forrester, which coined the term. NoOps is about standardizing the approach to deployments and reducing the number of variables, bringing simplicity. At its core, NoOps is focused on automating deployments and executions that are predictable and repeatable. The development and increasing adoption of containers are critical to the entire NoOps philosophy. Containers provide the ability to independently deploy services and applications, automating and standardizing the process to deploy anything, anywhere. Using containers delivers the tremendous portability that hasn’t been seen since the development of generic hardware. With encapsulation within the container, whatever is running inside will behave the same no matter where it is deployed. The NoOps-containers movement will transform the entire DevOps industry.


Today’s Lens of Information Governance (IG)

With the increasing list of data privacy laws and regulations, and because remote workforces have created greater disconnect and information silos among departments, it is even more important for organizations to not treat data privacy as a one-department task. Instead, they must work as an organization to break through organizational data silos to ensure compliance is part of the entire culture. Though no specific national privacy regulation currently exists, any nationwide rules would likely follow the standards set forth by the European Union’s General Data Protection Regulation and the California Consumer Privacy Act (CCPA). Complicating matters further, online privacy laws, which differ widely from state to state, could expose companies to potential fines, reputational risk and damages resulting from data incidents. The California attorney general, for example, can impose penalties of up to $2,500 for non-willful violations and $7,500 for intentional violations of the CCPA. Other key data regulations include the Sarbanes–Oxley Act of 2002, which standardizes record management practices, and the Gramm–Leach–Bliley Act (1999), which requires financial institutions to shield the nonpublic personal information of customers.


Disaster Recovery for Multi-Region Kafka at Uber

When disaster strikes the primary region, the active-active service assigns another region to be the primary, and the surge pricing calculation fails over to another region. It’s important to note that the computation state of the Flink job is too large to be synchronously replicated between regions, and therefore its state must be computed independently from the input messages from the aggregate clusters. And a key insight from the practices is that offering reliable and multi-regional available infrastructure services like Kafka can greatly simplify the development of the business continuity plan for the applications. The application can store its state in the infrastructure layer and thus become stateless, leaving the complexity of state management, like synchronization and replication across regions, to the infrastructure services. Another multi-region consumption mode is active/passive: only one consumer (identified by a unique name) is allowed to consume from the aggregate clusters in one of the regions (i.e. the primary region) at a time. The multi-region Kafka tracks its consumption progress in the primary region, represented by the offset, and replicates the offset to other regions. So upon failure of the primary region, the active/passive mode allows the consumer to failover to another region and resume its consumption.
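The active/passive mode described above can be reduced to a toy model. The sketch below assumes the replicated offset is already translated into each region's aggregate-cluster offset space; in practice offsets differ between clusters, which is precisely the state-management complexity the infrastructure layer absorbs. Names are illustrative, not Uber's actual API:

```python
class ActivePassiveConsumer:
    """Toy model of active/passive consumption: one primary region
    consumes, its committed offset is replicated to the other regions,
    and failover resumes from the last replicated offset."""

    def __init__(self, regions):
        self.primary = regions[0]
        self.replicated = {region: 0 for region in regions}

    def commit(self, offset):
        # Progress is tracked in the primary, then copied to every region
        for region in self.replicated:
            self.replicated[region] = offset

    def failover(self, new_primary):
        # Promote another region and return the offset to resume from
        self.primary = new_primary
        return self.replicated[new_primary]
```

The consumer itself stays stateless across regions: everything it needs to resume after a disaster is the replicated offset, which is the point of pushing state into the infrastructure layer.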


Here’s How IT Leaders Can Adapt to Stricter Data Privacy Laws

Data-reliant businesses like Apple and Facebook, which make billions of dollars annually off personal information, are keeping a close watch on the shifting privacy landscape. Google’s plan to eliminate third-party cookies from Chrome was a move towards ensuring consumer trust, and now many businesses and their IT teams are facing massive changes to their privacy and data collection practices. Google’s gesture is ironic, seeing as the company is facing a $5B lawsuit after being accused of illegally invading the privacy of millions of users by continuously tracking internet usage through browsers set in “private” mode. Many CIOs and tech teams were initially afraid of the potential impact California’s initial CCPA would have on their businesses, especially considering the massive GDPR violations that have cost organizations upwards of $228M. Businesses and their tech teams should expect to see a continued federal push from the Biden administration to implement nationalized standards for data protection. The movement is starting to take shape with the passing of California’s new CPRA law, which gives consumers the power of consent over how businesses manage their data. This is a big win for consumers, as nearly every major data company in the financial market has holding operations in California.


NSA Warns of Hacking Tactics That Target Cloud Resources

The warning comes after a week's worth of revelations over the SolarWinds breach that has affected government agencies as well as corporations, including Microsoft, FireEye, Intel and Nvidia. Secretary of State Mike Pompeo, commenting on the breach, said in a Friday evening radio interview that "the Russians engaged in this activity." "I can't say much more as we're still unpacking precisely what it is, and I'm sure some of it will remain classified," Pompeo said, according to a transcript provided by the State Department. "But suffice it to say there was a significant effort to use a piece of third-party software to essentially embed code inside of U.S. government systems, and it now appears systems of private companies and companies and governments across the world as well. This was a very significant effort, and I think it's the case that now we can say pretty clearly that it was the Russians that engaged in this activity." In a pair of tweets on Saturday, President Donald Trump appeared to question whether Russia was involved in the hacking operation and opened up the possibility that China may have played a role. "The Cyber Hack is far greater in the Fake News Media than in actuality," Trump tweeted.


Advice for incident responders on recovery from systemic identity compromises

Once your incident responders and key personnel have a secure place to collaborate, the next step is to investigate the suspected compromised environment. Successful investigation will be a balance between getting to the bottom of every anomalous behavior to fully scope the extent of attacker activity and persistence, and taking action quickly to stop any further actions on objectives by the attacker. Successful remediation requires as complete an understanding of the initial method of entry and the persistence mechanisms controlled by the attacker as possible. Any persistence mechanisms missed could result in continued access by the attacker and the potential for re-compromise. ... There are many ways to detect activity associated with this campaign. Exactly how your organization will detect attacker behavior depends on which security tools you have available, or choose to deploy in response. Microsoft has provided examples publicly for some of the core security products and services that we offer and is continually updating those documents as new threat intelligence related to this attacker is identified.


What the antitrust lawsuits against big tech companies could mean for tech leaders

With the Microsoft antitrust action more than 20 years in the past, perhaps the first obvious lesson that's applicable to today's tech giants is that whatever happens, it will happen slowly. Microsoft was sued in May 1998, and the settlement reached during the appeals process was approved in 2004. Much can happen in technology in six years; in fact, Google went from a university project to preparing for IPO during the full course of the Microsoft lawsuit. These companies are probably some of the few entities with the breadth and depth of legal resources to match the US government, so any action as dramatic as a forced breakup or significant restructuring of these giants that would significantly impact customers is likely years away at the earliest. In the nearer term, however, expect the tech giants to launch significant marketing efforts to polish up their public appearances and present themselves as champions of consumers and unwitting victims of government overreach. This campaign to generate goodwill may manifest itself in more transparent contractual terms, lower pricing, or more transparency for customers, benefits that will likely become available for little more than mentioning that you're concerned about the potential outcome of these lawsuits.


Data’s Gender Gap: How to Address It

It is not enough to simply leave positions open to those of different genders (and races, sexual orientations, abilities, etc.); we must intentionally seek out those with different backgrounds to fill them. If the majority of those working on a team are men, a woman may feel unwelcome in that space. She might question what kind of workplace culture led to an all-male team, and whether her contributions might be second-guessed by others due to her gender. When only one or a handful of women are present in a workplace, they may feel tokenized. By deliberately recruiting a representative population of women, an organization shows a base level of commitment to welcoming and including people with different viewpoints and genders. According to LinkedIn’s 2018 Gender Insights Report, women apply to 20% fewer postings than men while on a job hunt. It is not certain whether this is simply because women are more selective in their job hunt, or because they are less likely than men to apply to a listing whose requirements they do not precisely fit. Either way, recruiters can make an effort to seek out women whose backgrounds fit the positions they are hiring for, and ask those they know to refer non-male candidates they believe would be up for the job.


The stakeholder–shareholder debate is over

CEOs are now becoming more like politicians, who have to be prepared to answer questions on just about any aspect of society. That’s a sharp departure for chief executives, whose compasses were previously pointed in a fixed direction toward shareholders. “The role is evolving, and it’s going to require a different kind of intelligence and greater situational awareness,” said George Barrett, former chairman and chief executive of Cardinal Health. “The job requires managing multiple levers. It used to be that most of these levers were behind the scenes. They were operational. There were a couple of stakeholders who had big, loud voices, and leaders tended to focus on managing them. Today, everything is louder, and leaders must be attentive to more engaged stakeholders. That requires a pretty skillful hand.” Chip Bergh, CEO of Levi Strauss, echoed Barrett’s insights: “You have to navigate all the different stakeholders and do the right thing. You also have to decide where you draw the line. Where do you weigh in? Because if you stand for everything, you stand for nothing. So we pick our spots about when we comment, and sometimes those are tough calls.”


Do You Think Like a Lawyer, a Scientist, or an Engineer?

Scientific thinking is an entirely different form of logical analysis. The challenge in science is not to follow the rules or define the rules; the challenge is to discover them. In any truly scientific investigation, we do not know the rules in advance. To discover the rules, we use observation and inference. This contrasts strongly with the IRAC method of logical analysis. The scientific method emphasizes intellectual humility, treating knowledge as layers of hypotheses. Accumulating new knowledge requires designing and running experiments to test new hypotheses. A hypothesis is an idea about what rules may govern a certain situation. Designing an experiment means imagining how a system would behave if a certain rule holds true. Running an experiment means carrying out a scenario to see if the results match your expectations. In the scientific method, you validate your mental model against observed results. If the results match your expectations, this gives you confidence that the hidden rules match your hypothesis. The defining characteristic of the scientific method is building systems that enable us to learn. Learning the underlying rules (while holding our knowledge of them as tentative) is the product of this exercise.



Quote for the day:

"Preconceived notions are the locks on the door to wisdom." -- Mary Browne

Daily Tech Digest - December 21, 2020

Building Trust with Centralized Data Access

As businesses continue to find ways to use, monetize, and aggregate data, they need to effectively share their data in a way that’s more secure than an email and more scalable than sending a thumb drive by courier. They also need methods to use data more efficiently. In particular, businesses that are exploring ML and AI solutions need to look to data trusts to provide these solutions at scale, because the tedious overhead of data prep required to fuel these solutions can derail projects entirely. Data trusts are also a logical next step for any government or government institution looking to achieve greater transparency and drive innovation. After all, a data trust is primarily a vehicle for securely collecting and disseminating public, private, and proprietary information. Government data systems are complex; data trusts are a useful tool that can be used to synthesize, standardize, and audit data that is generated or used internally. The key difference is that for businesses, the value of a data trust lies in increasing data use within the organization, whereas for governments it is primarily a tool to audit data assets and better understand internal data environments.


Five ways COVID-19 will change cybersecurity

Next year, CISOs will have to grapple with the consequences of the decisions they made (or were forced to make) in 2020. One of their first orders of business will be to “un-cut” the corners they took in the spring to stand up remote work capabilities. We’re already starting to see this trend play out, with zero trust – an emerging security mindset that treats everything as hostile, including the network, host, applications, and services – gaining traction: in November, 60 percent of organizations reported that they were accelerating zero trust projects. That’s due in no small part to CISOs and CSOs retrenching and taking a more deliberate approach to ensuring operational security. The security leaders who help their organizations successfully navigate the zero trust journey will recognize that a zero trust mindset has to incorporate a holistic suite of capabilities including, but not limited to: strong multifactor authentication, comprehensive identity governance and lifecycle, and effective threat detection and response fueled by comprehensive visibility across all key digital assets. To address the increasing complexity induced by digital transformation, effective security leaders will embrace the notion of extended detection and response (XDR), striving for unified visibility across their networks, endpoints, cloud assets, and digital identities.


Stop the Insanity: Eliminating Data Infrastructure Sprawl

There are so many projects going on that navigating the tangle is pretty difficult. In the past, you generally had a few commercial options. Now, there might be tens or hundreds of options to choose from. You end up having to narrow it down to a few choices based on limited time and information.  Database technology in particular has seen this problem mushroom in recent years. It used to be you had a small number of choices: Oracle, Microsoft SQL Server, and IBM DB2 as the proprietary choices, or MySQL if you wanted a free and open source choice. Then, two trends matured: NoSQL, and the rise of open source as a model. The number of choices grew tremendously. In addition, as cloud vendors are trying to differentiate, they have each added both NoSQL databases and their own flavors of relational (or SQL) databases. AWS has more than 10 database offerings; Azure and GCP each have more than five flavors. ... If you’re building a new solution, you have to decide what data architecture you need. Even if you assume the requirements are clear and fixed – which is almost never the case – navigating the bewildering set of choices as to which database to use is pretty hard. You need to assess requirements across a broad set of dimensions – such as functionality, performance, security, and support options – to determine which ones meet your needs.


Agility for business — championing customer expectations in 2021

2020 has shown that remote working isn’t just possible for many traditionally office-based industries such as customer service, but also sometimes preferable. It has given many employees a better way to structure their workday and work/life balance while ensuring they stay protected. In 2021, flexible working models will continue to become more prominent. Businesses and their customer experience teams will therefore need to dynamically manage employees and anticipate different working scenarios — remote work, in the office, off-shore, on-shore, in-house or outsourced — and enable them to deliver service across multiple channels. This means managers must be equipped with the tools to manage an increasingly divergent, agile workforce. The workforce must be effectively and efficiently managed as agents work across any channel and from any location. Also, as digital tools continue to increase in prominence, a robotic workforce will need to be managed together with customer service employees as one integrated workforce. By embracing and adapting to these new working conditions, businesses will be better placed to maintain customer service levels whatever the circumstance.


FireEye: SolarWinds Hack 'Genuinely Impacted' 50 Victims

Microsoft on Thursday disclosed that it too was hacked, but says there are no signs that its software was either Trojanized or used to infect anyone else. On Friday, Palo Alto, California-based VMware said it was also a victim of the supply chain attack. "While we have identified limited instances of the vulnerable SolarWinds Orion software in our own internal environment, our own internal investigation has not revealed any indication of exploitation," VMware said in a statement. FireEye's Mandia said in his Sunday interview that the SolarWinds Orion code was altered in October 2019, but that the backdoor wasn't added until March. An unnamed source with knowledge of the investigation told Yahoo News that last October's effort appeared to be a "dry run," adding that the attackers' caution suggested that they were "a little bit more disciplined and deliberate" than the average attacker. Investigators say the attack appears to have been launched by Russia as part of a cyber espionage operation, and potentially by Moscow's SVR foreign intelligence service. U.S. Secretary of State Mike Pompeo on Friday said in a radio interview that "we can say pretty clearly that it was the Russians." On Saturday, President Donald Trump attempted to downplay Pompeo's remarks.


Why Quantum Computing's Future Lies in the Cloud

The current generation of Noisy Intermediate-Scale Quantum (NISQ) computers are large, temperamental, and complicated to maintain, said Konstantinos Karagiannis, an associate director at business, finance, and technology consulting firm Protiviti. They are also very expensive and likely to be rapidly outdated, he added. Karagiannis, like most other sector experts, believes that the enterprise path to quantum computing access is more likely to go through the cloud than the data center. "Providing cloud access to quantum computers ... allows researchers and companies worldwide to share these systems and contribute to both academia and industry," he said. "As more powerful systems come online, the cloud approach is likely to become a significant revenue source [for service providers], with users paying for access to NISQ systems that can solve real-world problems." The limited lifespans of rapidly advancing quantum computing systems also favor cloud providers. "Developers are still early along in hardware development, so there's little incentive for a user to buy hardware that will soon be made obsolete," explained Lewie Roberts, a senior researcher at Lux Research. "This is also part of why so many large cloud players ... are researching quantum computing," Roberts noted. "It would nicely augment their existing cloud services," he added.


Microsoft Finds Backdoor; CISA Warns of New Attack Vectors

The hacking campaign involved slipping malicious backdoors into software updates for SolarWinds' popular network management software called Orion. Once those updates were installed by organizations, the attackers had free-ranging access to networks and could install other malware and access data, such as email accounts. Orion has powerful, administrative access, says John Bambenek, chief forensic examiner and president of Bambenek Consulting and an incident handler at the SANS Institute. "Owning SolarWinds is effectively owning the CIO," Bambenek says. "You've got the infrastructure. You don't need a special tool to sit there and change passwords or create accounts or spin up new VMs [virtual machines]. It's all built in, and you've got full access." As many as 18,000 organizations downloaded the infected updates, SolarWinds has said. But experts believe the hacking group likely only deeply penetrated a few dozen organizations, with many in the U.S. government sphere. The U.S. Cybersecurity and Infrastructure Security Agency warned Thursday, however, that the SolarWinds compromise "is not the only initial infection vector this actor leveraged."


Demystifying Master Data Management

For master data to fuel MDM, it must be organized into relevant business schemas. Reference data, imported from multiple customers, needs to be made relevant to work activities (e.g. automating account processing, from the example above). Humans intervene with this reference data and add new data or transform it into an information product (e.g. adding transactions to invoices, matching bills). The data transformation throughout the company needs to work within the larger business context, including enhancing the reference data. When customers view the final information (e.g. that bills have been paid), the reference data used throughout the production process needs to be made available. MDM provides the framework needed to move and use raw master data. Since MDM involves a complete 360-degree business view, all company departments contribute to the conception of the master data context. What may be relevant information to one business department may not be to another and may not relate to the master data context. Listing what comprises master data, including reference data, and the systems that generate master data, gives a picture of how to integrate master data with other systems throughout the entire business. But this is only a start. Providing cross-organizational commitment to the master data’s relevancy and guidance on its contextual structure becomes critical. A Data Governance program fills this need.


Hackers Use Mobile Emulators to Steal Millions

"This mobile fraud operation managed to automate the process of accessing accounts, initiating a transaction, receiving and stealing a second factor - SMS in this case - and in many cases using those codes to complete illicit transactions," according to IBM. "The data sources, scripts and customized applications the gang created flowed in one automated process, which provided speed that allowed them to rob millions of dollars from each victimized bank within a matter of days." ... They then connected to the account through a matching VPN service, according to the report. The attackers also could bypass protections, such as multifactor authentication, because they already had access to the victims' SMS messages. "A key takeaway here is that mobile malware has graduated to a fully automated process that should raise concern across the global financial services sector," Kesem says. "We have never seen a comparable operation in the past, and the same gang is likely bound to repeat these attacks. But they are also already being offered 'as-a-service' via underground venues to other cybercriminals. We also suspect that these scaled, sporadic attacks are going to become a more common way cybercriminals target banks and their customers through the mobile banking channel in 2021."


How artificial intelligence can drive your climate change strategy

From a business perspective, there is a strong connection between sustainability and business benefits, with nearly 80% of executives pointing to an increase in customer loyalty as a key benefit from sustainability initiatives. Over two thirds (69%) pointed to an increase in brand value. The impact of sustainability credentials on brand value and sales is supported by our consumer research: if consumers perceive that the brands they are buying from are not environmentally sustainable or socially responsible, 70% tell their friends and family about the experience and urge them not to interact with the organisation. The research found that 68% of the organisations also cited improvement in environmental, social and governance (ESG) ratings of their organisation driven by sustainability initiatives, with nearly 63% of organisations saying that sustainability initiatives have helped boost revenues. Another high-impact industry which we are seeing adapt to the new world order is the automotive sector. Automotive and mobility companies worldwide are facing increasing pressure from both consumers and government regulators to prioritise their sustainability efforts. We’re seeing a fundamental potential for a shift in approach as consumers adopt new, greener and more flexible approaches to getting from A to B.



Quote for the day:

"I say luck is when an opportunity comes along and you're prepared for it." -- Denzel Washington

Daily Tech Digest - December 20, 2020

What Is a Minimum Viable AI Product?

Most organizations don’t want to use a separate AI application, so a new solution should allow easy integration with existing systems of record, typically through an application programming interface. This allows AI solutions to plug into existing data records and combine with transactional systems, reducing the need for behavior change. Zylotech, another Glasswing company, applies this principle to its self-learning B2B customer data platform. The company integrates client data across existing platforms; enriches it with a proprietary data set about what clients have browsed and bought elsewhere; and provides intelligent insights and recommendations about next best actions for clients’ marketing, sales, data, and customer teams. It is designed specifically to directly complement clients’ existing software suites, minimizing adoption friction. Another integration example is Verusen, an inventory optimization platform also in the Glasswing portfolio. Given the existence of large, entrenched enterprise resource planning players in the market, it was essential for the platform to integrate with such systems. It gathers existing inventory data and provides its AI-generated recommendations on how to connect disparate data and forecast future inventory needs without requiring significant user behavior change.


Half of 4 Million Public Docker Hub Images Found to Have Critical Vulnerabilities

A recent analysis of around 4 million Docker Hub images by cyber security firm Prevasio found that 51% of the images had exploitable vulnerabilities. A large number of the malicious images were cryptocurrency miners, both open and hidden, and 6,432 of the images contained malware. Prevasio’s team performed both static and dynamic analysis of the images. Static scanning includes dependency analysis, which checks the dependency graph of the software present in the image for published vulnerabilities. In addition to this, Prevasio's team also performed dynamic scanning - i.e. running containers from the images and monitoring their runtime behaviour. The report groups images into vulnerable ones as well as malicious ones. Almost 51% of the images had critical vulnerabilities that could be exploited, and 68% of the images were vulnerable to various degrees. 0.16%, or 6,432, of the analyzed images had malicious software in them. Windows images, which accounted for 1% of the total, and images without tags, were excluded from the analysis. Earlier this year, Aqua Security’s cyber-security team uncovered a new technique where attackers were building malicious images directly on misconfigured hosts.


Getting Started— A Coder’s Guide to Neural Networks

The world is talking so much about machine learning and AI, but hardly anyone seems to know how it works; on the flip side, everyone makes it seem like they’re experts on it. The unfortunate truth is that the knowledge and know-how seem to be stuck with the academic elites. For the most part, the material online for learning about machine learning and deep learning falls into one of three categories: shallow tutorials with barely any explanation of why certain patterns are followed; copy-and-paste material by those who want to pretend to have a self-made portfolio; or such intimidating math-heavy lessons that you get lost in all the Greek. This book was written to get away from all of that. It’s meant to be a very easy read which walks the reader through a journey on the fundamentals of neural networks. This book's purpose is to get the knowledge out of the hands of the few and bring it into the hands of any coder. Before continuing, let’s clear something up. From an outsider’s perspective, the world of AI consists of so many terms which seem to mean the same thing. Machine learning, deep learning, artificial intelligence, neural networks. Why are there so many seemingly synonymous terms? Let’s take a look at the diagram below.
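The fundamentals the book promises really can fit in a few lines of code. As a taster (this sketch is ours, not the book's), here is the basic building block of every neural network: a single artificial neuron, which computes a weighted sum of its inputs plus a bias and squashes the result through an activation function.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs, plus a bias,
    passed through a sigmoid activation that squashes the result to (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# With inputs [1.0, 2.0], weights [0.5, -0.25], and bias 0, the weighted
# sum is exactly 0, so the sigmoid returns 0.5.
```

Layers of such neurons, with the weights and biases adjusted during training, are all a "deep" network is; the rest of the field's terminology builds on this one primitive.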


Here's how opinions on the impact of artificial intelligence differ around the world

Views of AI are generally positive among the Asian publics surveyed: About two-thirds or more in Singapore (72%), South Korea (69%), India (67%), Taiwan (66%) and Japan (65%) say AI has been a good thing for society. Many places in Asia have emerged as world leaders in AI. Most other places surveyed fall short of a majority saying AI has been good for society. In France, for example, views are particularly negative: Just 37% say AI has been good for society, compared with 47% who say it has been bad for society. In the U.S. and UK, about as many say it has been a good thing for society as a bad thing. By contrast, Sweden and Spain are among a handful of places outside of the Asia-Pacific region where a majority (60%) views AI in a positive light. As with AI, Asian publics surveyed stand out for their relatively positive views of the impact of job automation. Many Asian publics have made major strides in the development of robotics and AI. The South Korean and Singaporean manufacturing industries, for instance, have the highest and second highest robot density of anywhere in the world.


Artificial Intelligence In The New Normal – Attitude Of Public In India

There have been several discussions in society about AI posing a threat to humanity and our way of living and working. In our study, 42% of the public believe that the impact of AI on net new jobs created will depend on the industry, and on balance feel that, overall, more new jobs will be created than lost (Net score 1%). 63% of the public feel that humans will always be more intelligent than AI systems. One puzzling trend that emerges in the study is how the youngsters perceive AI. Those aged under 40 (Net score -8%) are relatively less optimistic that net new jobs will be created compared to those aged over 40 (Net score 14%). Further, respondents aged under 40 are three times less confident than those aged over 40 that human intelligence will not be overtaken by AI. What explains this apparent diffidence among the youth? Or I wonder if they are being more prescient than the others about reaching singularity! I believe there is a need for appropriate education and communication strategies for the youth in India about AI and its positive potential. The public in India demonstrate a sense of optimism about the future in the new normal and believe in science and technology to make their lives better.


Data governance in the FinTech sector: A growing need

The neobanking model is another FinTech model that has seen significant traction globally. In India, neobanks primarily operate in partnership with one or multiple banking partner(s). This leads to sharing of data between the two entities for multiple banking services provided to consumers. To ensure regulated usage and security of customer data shared by banks with neobanks and vice versa, proper data security and access guidelines would need to be in place. Other FinTech segments, including payments and WealthTech, also require strong DG frameworks to ensure compliance both within the organisation and across its partners. In recent times, the industry has seen the introduction of several data-related laws and regulations aimed at ensuring the privacy and security of an individual’s PII and sensitive data. Some of the key focus areas include data sharing, data usage, consent and an individual’s data rights. Hence, there is increasing pressure on companies to remain compliant while adopting rapidly evolving FinTech models. Considering the changing regulatory landscape and requirements, some FinTech companies have already performed readiness assessments and have started to adopt an enterprise DG framework that would help them ensure effective data management ...


Ten Essential Building Blocks of a Successful Enterprise Architecture Program

The danger (or maybe, in some cases, the opportunities) for EAs is that they may be expected to be conversant in any type of architecture. In other words, some organizations may only hire one EA and expect her to be able to do any kind of architecture work except that of licensed architects. If one considers that EA work could be very different in, say, government organizations compared to for profit or non-profit ones, then one could imagine specialized EAs (e.g., Government Enterprise Architect, Non-Profit, Conglomerate Architect, etc.) that requires specialized training and experience. In fact, there has been general recognition that doing EA in government can be quite different from in profit-driven enterprises and therefore special frameworks training for government-centric EA may be appropriate. Nonetheless, the leading generic, openly available EA framework for professional certification is The Open Group Architecture Framework (TOGAF), which, with expert assistance, can be adapted to incorporate elements of both DODAF and the FEA Framework (FEAF). With so many frameworks, methods, and standards to choose from, why is customization always required?


How Data Governance Can Improve Corporate Performance

While data governance is a systematic methodology for businesses to comply with external regulations such as GDPR, HIPAA, Sarbanes-Oxley, and future regulations, it can also establish a foundation and controls to strengthen internal decision-making for determining product costs, inventory, consumer demand, and more. While there are many factors to consider for building a data governance program, two of the most pressing items that should be top of mind are data quality and self-service analytics. It’s advantageous to include efforts to ensure data quality is part of your data governance program. Trying to govern data that is old, corrupted or duplicated can become quite messy. Although the tools for managing quality and governance are generally different, data governance provides a framework for data quality. Poor data quality exists for many reasons, such as having data spread out in department silos, different versions of the “same” data or information lacking in common name identifiers. Without data quality, organizations also face a real possibility of making faulty business decisions and having a sub-standard governance program. Generally, the more data governance a company has, the stronger its data quality will be.


5 Steps to Success in Data Governance Programs

What exactly does a successful data governance program look like? Author Bhansali (2014) defines data governance as “the systematic management of information to achieve objectives that are clearly aligned with and contribute to the organization’s objectives” (p.9). So, a successful data governance program is one that achieves these aligned objectives and furthers the interests of the organization to which it is applied. In our reading for this week, Bhansali (2014) outlined several key steps in the creation of data governance platforms. These steps are by no means an exhaustive road-map for a perfect data governance platform, nor are they necessarily chronological. Still, they do provide a launching point for useful discussion. A data governance program must be aligned with any existing business strategies. This also involves being aware of the vision of the future that guides and defines the business. If Apple were the company under consideration, you might think of their vision being an iPhone in the pocket of every person on earth. Create a clear and logical model of the data governance process that is specific to your organization. This model should stand apart from any products or technologies created by the company and must be based on any key processes or standards ...


Application Level Encryption for Software Architects

Unless well defined, the task of application-level encryption is frequently underestimated and poorly implemented, and it results in haphazard architectural compromises when developers find out that integrating a cryptographic library or service is just the tip of the iceberg. Whoever is formally assigned the job of implementing encryption-based data protection faces thousands of pages of documentation on how to implement things better, but very little on how to design things correctly. Design exercises turn into a bumpy ride every time you don’t expect the need for design and instead make a sequence of ad hoc decisions because you anticipated getting things done quickly. First, you face key-model and cryptosystem choices, which hide under “which library/tool should I use for this?” Hopefully, you chose a tool that fits your use case security-wise, not the one with the most stars on GitHub. Hopefully, it contains only secure and modern cryptographic decisions. Hopefully, it will be compatible with other teams' choices when the encryption has to span several applications/platforms. Then you face key storage and access challenges: where to store the encryption keys, how to separate them from the data, what the integration points are where components and data meet for encryption/decryption, and what the trust/risk level toward these components is.
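The key storage and separation challenge above is usually addressed with an envelope-encryption design: a key service holds a key-encryption key (KEK) that applications never see, each record is encrypted under its own fresh data key, and only the wrapped data key travels with the data. The sketch below shows that structure only; the XOR "cipher" is a deliberately insecure stand-in for a real AEAD such as AES-GCM from a vetted library, and all class and function names are illustrative.

```python
import secrets
from itertools import cycle

def toy_cipher(key, data):
    """Placeholder for a real AEAD cipher. XOR with a cycled key is NOT
    secure encryption; it only demonstrates the data flow."""
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

class KeyService:
    """Holds the key-encryption key (KEK); applications never see it."""

    def __init__(self):
        self._kek = secrets.token_bytes(32)

    def wrap(self, data_key):
        return toy_cipher(self._kek, data_key)

    def unwrap(self, wrapped_key):
        return toy_cipher(self._kek, wrapped_key)

def encrypt_record(key_service, plaintext):
    # Envelope pattern: a fresh data key per record, stored only wrapped.
    data_key = secrets.token_bytes(32)
    ciphertext = toy_cipher(data_key, plaintext)
    return key_service.wrap(data_key), ciphertext

def decrypt_record(key_service, wrapped_key, ciphertext):
    data_key = key_service.unwrap(wrapped_key)
    return toy_cipher(data_key, ciphertext)
```

The design point is the separation of trust: the database stores only ciphertext and wrapped keys, so a database breach alone reveals nothing, while the key service handles KEK rotation and access control without ever touching business data.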



Quote for the day:

"No one reaches a high position without daring." -- Publilius Syrus