Daily Tech Digest - August 17, 2020

Remote DevOps is here to stay!

With a mass exodus of the workforce towards a home setting, especially in India, the demand for skilled professionals in DevOps has dramatically increased. A recent GitHub report on the implications of COVID-19 for the developer community suggests that developer activity has increased compared to last year. In other words, developers have shown resilience and continued to contribute, undeterred by the crisis. This is a shining moment for DevOps, which is built for remote operations. In a ‘choose your own adventure’ situation, DevOps helps organizations evaluate their own goals, skills, bottlenecks, and blockers to curate a modern application development and deployment process that works for them. As per an UpGuard report on DevOps Stats for Doubters, 63% of organizations that implemented DevOps experienced an improvement in the quality of their software deployments. Delivering business value from data is contingent on developers’ ability to innovate through methods like DevOps. It is about deploying the right foundation for modern application development across both public and private clouds. The current environment is uncharted territory for many enterprises.


Breaking Down Serverless Anti-Patterns

The goal of building with serverless is to dissect the business logic in a manner that results in independent and highly decoupled functions. This, however, is easier said than done, and developers often run into scenarios where libraries, business logic, or even just basic code has to be shared between functions, leading to a form of dependency and coupling that works against the serverless architecture. Functions depending on one another through a shared code base and logic leads to an array of problems. The most prominent is that it hampers scalability. As your systems scale and functions are constantly reliant on one another, there is an increased risk of errors, downtime, and latency. The entire premise of microservices was to avoid these issues. Additionally, one of the selling points of serverless is its scalability. By coupling functions together via shared logic and a shared codebase, the system works against not only the microservices model but also the core value of serverless: scalability. For example, a change in the data logic of function A will force changes in how data is communicated and processed in function B, and even function C may be affected depending on the exact use case.
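
To make the coupling problem concrete, here is a minimal, hypothetical sketch (function and field names are invented, not taken from the article): in the anti-pattern, two handlers import the same data-shaping helper, so a change made for one silently changes the other; in the decoupled variant, each function owns its own logic and agrees only on a published event contract.

```python
# --- Anti-pattern: two functions coupled through a shared module ---
# shared_logic.py (imported by both functions in a real deployment)
def normalize_order(raw: dict) -> dict:
    return {"id": raw["order_id"], "total": float(raw["amount"])}

# Both handlers call normalize_order(); changing the field names for A
# forces a matching change (and redeploy) on B.
def handle_pricing(event, context):           # "function A"
    return {"priced": normalize_order(event)}

def handle_shipping(event, context):          # "function B"
    return {"shipped": normalize_order(event)}

# --- Decoupled alternative: functions agree only on an event contract ---
def handle_pricing_v2(event, context):
    order = {"id": event["order_id"], "total": float(event["amount"])}
    return {"event_type": "order_priced", "version": 1, "order": order}

def handle_shipping_v2(event, context):
    order = event["order"]                    # relies only on the published contract
    return {"event_type": "order_shipped", "order_id": order["id"]}
```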


Why Service Meshes Are Security Tools

Modern engineering organizations need to give individual developers the freedom to choose what components they use in applications as well as how to manage their own workflows. At the same time, enterprises need to ensure that there are consistent ways to manage how all of the parts of an application communicate inside the app as well as with external dependencies. A service mesh provides a uniform interface between services. Because it’s attached as a sidecar acting as a micro-dataplane for every component within the service mesh, it can add encryption and access controls to communication to and from services, even if neither is natively supported by that service. Just as importantly, the service mesh can be configured and controlled centrally. Individual developers don’t have to set up encryption or configure access controls; security teams can establish organization-wide security policies and enforce them automatically with the service mesh. Developers get to use whatever components they need and aren’t slowed down by security considerations. Security teams can make sure encryption and access controls are configured appropriately, without depending on developers at all.


Review: AWS Bottlerocket vs. Google Container-Optimized OS

Bottlerocket uses control groups (cgroups) and kernel namespaces to isolate the containers running on the system. eBPF (extended Berkeley Packet Filter) is used to further isolate containers and to verify container code that requires low-level system access. The eBPF secure mode prohibits pointer arithmetic, traces I/O, and restricts the kernel functions the container has access to. The attack surface is reduced by running all services in containers: while a container might be compromised, it’s less likely the entire system will be breached, thanks to container isolation. Updates are automatically applied when running the Amazon-supplied edition of Bottlerocket via a Kubernetes operator that comes installed with the OS. An immutable root filesystem, which creates a hash of the root filesystem blocks and relies on a verified boot path using dm-verity, ensures that the system binaries haven’t been tampered with. The configuration is stateless and /etc/ is mounted on a RAM disk. When running on AWS, configuration is accomplished with the API, and these settings persist across reboots, as they come from file templates within the AWS infrastructure.


Microsoft tells Windows 10 users they can never uninstall Edge. Wait, what?

Microsoft explained it was migrating all Windows users from the old Edge to the new one. The update added: "The new version of Microsoft Edge gives users full control over importing personal data from the legacy version of Microsoft Edge." Hurrah, I hear you cry. That's surely holier than Google. Microsoft really cares. Yet next were these words: "The new version of Microsoft Edge is included in a Windows system update, so the option to uninstall it or use the legacy version of Microsoft Edge will no longer be available." Those prone to annoyance would cry: "What does it take not only to force a product onto a customer but then make sure that they can never get rid of that product, even if they want to? Even cable companies ultimately discovered that customers find ways out." Yet, as my colleague Ed Bott helpfully pointed out, there's a reason you can't uninstall Edge. Well, initially. It's the only way you can download the browser you actually want to use. You can, therefore, hide Edge -- it's not difficult -- but not completely eliminate it from your life. Actually that's not strictly true either. The tech world houses many large and twisted brains. They don't only work at Microsoft. Some immediately suggested methods to get your legacy Edge back on Windows 10. Here's one way to do it.


Digital public services: How to achieve fast transformation at scale

For most public services, digital reimagination can significantly enhance the user experience. Forms, for example, can require less data and pull information directly from government databases. Texts or push notifications can use simpler language. Users can upload documents as scans. In addition, agencies can link touchpoints within a single user journey and offer digital status notifications. Implementing all of these changes is no trivial matter and requires numerous actors to collaborate. Several public authorities are usually involved, each of which owns different touchpoints on the user journey. The number of actors increases exponentially when local governments are responsible for service delivery. Often, legal frameworks must be amended to permit digitization, meaning that the relevant regulator needs to be involved. Yet when governments use established waterfall approaches to project management (in which each step depends on the results of the previous step), digitization can take a long time and the results often fall short. In many cases, long and expensive projects have delivered solutions that users have failed to adopt.


State-backed hacking, cyber deterrence, and the need for international norms

The issue of how cyber attack attribution should be handled and confirmed also deserves to be addressed. Dr. Yannakogeorgos says that, while attribution of cyber attacks is definitely not as clear-cut as seeing smoke coming out of a gun in the real world, with robust law enforcement, public-private partnerships, cyber threat intelligence firms, and information sharing via ISACs, the US has come a long way in terms of not only figuring out who conducted criminal activity in cyberspace, but arresting global networks of cyber criminals as well. Granted, things get trickier when these actors are working for or on behalf of a nation-state. “If these activities are part of a covert operation, then by definition the government will have done all it can for its actions to be ‘plausibly deniable.’ This is true for activities outside of cyberspace as well. Nations can point fingers at each other, and present evidence. The accused can deny and say the accusations are based on fabrications,” he explained. “However, at least within the United States, we’ve developed a very robust analytic framework for attribution that can eliminate reasonable doubt amongst friends and allies, and can send a clear signal to planners on the opposing side...."


Tackling Bias and Explainability in Automated Machine Learning

At a minimum, users need to understand the risk of bias in their data set because much of the bias in model building can be human bias. That doesn't mean just throwing out variables, which, if done incorrectly, can lead to additional issues. Research in bias and explainability has grown in importance recently and tools are starting to reach the market to help. For instance, the AI Fairness 360 (AIF360) project, launched by IBM, provides open source bias mitigation algorithms developed by the research community. These include bias mitigation algorithms to help in the pre-processing, in-processing, and post-processing stages of machine learning. In other words, the algorithms operate over the data to identify and treat bias. Vendors, including SAS, DataRobot, and H2O.ai, are providing features in their tools that help explain model output. One example is a bar chart that ranks a feature's impact. That makes it easier to tell what features are important in the model. Vendors such as H2O.ai provide three kinds of output that help with explainability and bias. These include feature importance as well as Shapley partial dependence plots (e.g., how much a feature value contributed to the prediction) and disparate impact analysis. Disparate impact analysis quantitatively measures the adverse treatment of protected classes.
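
To illustrate the last of those outputs: disparate impact is commonly computed as the ratio of favorable-outcome rates between the unprivileged and privileged groups, with values well below 1.0 (often the 0.8 "four-fifths rule") flagging possible adverse treatment. A minimal pandas sketch with made-up data and column names:

```python
import pandas as pd

# Hypothetical scored dataset: one row per applicant.
df = pd.DataFrame({
    "group":    ["privileged", "privileged", "unprivileged", "unprivileged", "unprivileged"],
    "approved": [1, 1, 0, 1, 0],
})

rates = df.groupby("group")["approved"].mean()
disparate_impact = rates["unprivileged"] / rates["privileged"]
print(f"Disparate impact ratio: {disparate_impact:.2f}")  # ~0.33 here; below 0.8 is a common red flag
```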


Chief Data Analytics Officers – The Key to Data-Driven Success?

Core to the role is the experience and desire to use data to solve real business problems. Combining an overarching view of the data across the organisation, with a well-articulated data strategy, the CDAO is uniquely placed to balance specific needs for data against wider corporate goals. They should be laser-focused on extracting value from the bank’s data assets and ‘connecting-the-dots’ for others. By seeing and effectively communicating the links between different data and understanding how it can be combined to deliver business benefit, the CDAO does what no other role can do: bring the right data from across the business, plus the expertise of data scientists, to bear on every opportunity. Balance is critical. Leveraging their understanding of analytics and data quality, the CDAO can bring confidence to business leaders afraid to engage with data. They understand governance, and so can police which data can be used for innovation and which is business critical and ‘untouchable.’ They can deploy and manage data scientists to ensure they are focused on real business issues, not pet analytics projects. Innovation-focused CDAOs will actively look for ways to generate returns on data assets, and to partner with commercial units to create new revenue from data insights.


How the network can support zero trust

One broad principle of zero trust is least privilege, which is granting individuals access to just enough resources to carry out their jobs and nothing more. One way to accomplish this is network segmentation, which breaks the network into unconnected sections based on authentication, trust, user role, and topology. If implemented effectively, it can isolate a host on a segment and minimize its lateral or east–west communications, thereby limiting the "blast radius" of collateral damage if a host is compromised. Because hosts and applications can reach only the limited resources they are authorized to access, segmentation prevents attackers from gaining a foothold into the rest of the network. Entities are granted access and authorized to access resources based on context: who an individual is, what device is being used to access the network, where it is located, how it is communicating and why access is needed. There are other methods of enforcing segmentation. One of the oldest is physical separation in which physically separate networks with their own dedicated servers, cables and network devices are set up for different levels of security. While this is a tried-and-true method, it can be costly to build completely separate environments for each user's trust level and role.



Quote for the day:

"Gratitude is the place where all dreams come true. You have to get there before they do." -- Jim Carrey

Daily Tech Digest - August 16, 2020

When to use Java as a Data Scientist

When you are responsible for building an end-to-end data product, you are essentially building a data pipeline where data is fetched from a source, features are calculated based on the retrieved data, a model is applied to the resulting feature vector or tensor, and the model results are stored or streamed to another system. While Python is great for model training, and there are tools for model serving, it only covers a subset of the steps in this pipeline. This is where Java really shines, because it is the language used to implement many of the most commonly used tools for building data pipelines, including Apache Hadoop, Apache Kafka, Apache Beam, and Apache Flink. If you are responsible for building the data retrieval and data aggregation portions of a data product, then Java provides a wide range of tools. Also, getting hands-on with Java means that you will build experience with the programming language used by many big data projects. My preferred tool for implementing these steps in a data workflow is Cloud Dataflow, which is based on Apache Beam. While many tools for data pipelines support multiple runtime languages, there may be significant performance differences between the Java and Python options.
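
For a sense of what such a pipeline step looks like, here is a minimal word-count-style Apache Beam sketch. It is written with Beam's Python SDK for brevity, but the Java SDK (which Cloud Dataflow also runs) exposes the same Pipeline/PTransform structure; the input file, field layout, and runner are placeholders.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder paths; a Dataflow job would use gs:// paths and the DataflowRunner.
opts = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=opts) as p:
    (
        p
        | "ReadEvents"   >> beam.io.ReadFromText("events.txt")           # one CSV line per event
        | "KeyByUser"    >> beam.Map(lambda line: (line.split(",")[0], 1))
        | "CountPerUser" >> beam.CombinePerKey(sum)                      # aggregate per key
        | "Format"       >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "Write"        >> beam.io.WriteToText("counts")
    )
```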


Alert: Russian Hackers Deploying Linux Malware

Analysts have linked Drovorub to the Russian hackers working for the GRU, the alert states, noting that the command-and-control infrastructure associated with this campaign had previously been used by the Fancy Bear group. An IP address linked to a 2019 Fancy Bear campaign is also associated with the Drovorub malware activity, according to the report. The Drovorub toolkit has several components, including a toolset consisting of an implant module coupled with a kernel module rootkit, a file transfer and port forwarding tool as well as a command-and-control server. All this is designed to gain a foothold in the network to create the backdoor and exfiltrate data, according to the alert. "When deployed on a victim machine, the Drovorub implant (client) provides the capability for direct communications with actor-controlled [command-and-control] infrastructure; file download and upload capabilities; execution of arbitrary commands as 'root'; and port forwarding of network traffic to other hosts on the network," according to the alert. Steve Grobman, CTO at the security firm McAfee, notes that the rootkit associated with Drovorub can allow hackers to plant the malware within a system and avoid detection, making it a useful tool for cyberespionage or election interference.


How Community-Driven Analytics Promotes Data Literacy in Enterprises

Data is deeply integrated into the business processes of nearly every company precisely because it is helping us make better decisions and not because of its ability to hasten lofty things, such as digital transformation. The C-suite sees the advantages data insights provide and as a result, non-technical employees are increasingly expected to be more technically adept at extraction and interpretation of data. Successful organizations foster a community of data-curious teams and empower them with a single platform that enables everyone, regardless of technical ability, to explore, analyze and share data. Furthermore, domain experts and business leaders must be able to generate their own content, build off of content created by others and promote high-value, trustworthy content, while also demoting old, inaccurate, or unused content. This should resemble an active peer review process where helpful content is promoted and bad content is flagged as such by the community, while simultaneously being managed and governed by the data team.


The Anatomy of a SaaS Attack: Catching and Investigating Threats with AI

SaaS solutions have been an entry point for cyber-attackers for some time – but little attention is given to how the tactics, techniques and procedures (TTPs) in SaaS attacks differ significantly from traditional TTPs seen in network and endpoint attacks. This raises a number of questions for security experts: how do you create meaningful detections in SaaS environments that don’t have endpoint or network data? How can you investigate threats in a SaaS environment? What does a ‘good’ SaaS environment look like as opposed to one that’s threatening? A global shortage in cyber skills already creates problems for finding security analysts able to work in traditional IT environments – hiring security experts with SaaS domain knowledge is all the more challenging. ... A more intricate and effective approach to SaaS security requires an understanding of the dynamic individual behind the account. SaaS applications are fundamentally platforms for humans to communicate – allowing them to exchange and store ideas and information. Abnormal, threatening behavior is therefore impossible to detect without a nuanced understanding of those unique individuals: where and when do they typically access a SaaS account, which files are they likely to access, who do they typically connect with?
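
One illustrative (and deliberately generic, not vendor-specific) way to model "normal for this user" is to featurize each login event – for example, hour of day and a coarse location code – and fit an unsupervised detector per account. A minimal scikit-learn sketch with invented data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented access history for one account: [hour_of_day, location_code]
history = np.array([[9, 1], [10, 1], [9, 1], [11, 1], [10, 2], [9, 1]])

detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

# Score a new SaaS login: 3 a.m. from an unseen location looks anomalous.
new_login = np.array([[3, 7]])
print(detector.predict(new_login))            # -1 => flagged as anomalous
print(detector.decision_function(new_login))  # lower score => more anomalous
```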


How to maximise your cloud computing investment

“At the core of the issue is that with a conventional, router-centric approach, access to applications residing in the cloud means traversing unnecessary hops through the HQ data centre, resulting in inefficient use of bandwidth, additional cost, added latency and potentially lower productivity,” said Pamplin. “To fully realise the potential of cloud, organisations must look to a business-driven networking model to achieve greater agility and substantial CAPEX and OPEX savings. “When it comes to cloud usage, a business-driven network model should also give clear application visibility through a single pane of glass, or else organisations will be in the dark regarding their application performance and, ultimately, their return on investment. “Only through utilisation of advanced networking solutions, where application policies are centrally defined based on business intent, and users are connected securely and directly to applications wherever they reside, can the benefits of the cloud be truly realised. “A business-driven approach eliminates the extra hops and risk of security compromises. This ensures optimal and cost-efficient cloud usage, as applications will be able to run smoothly while fully supported by the network. ..."


AI Needs To Learn Multi-Intent For Computers To Show Empathy

Wael ElRifai, VP for solution engineering at Hitachi Vantara, reminds us that teaching a chatbot multi-intent is a more manual process than we’d like to believe. He says that at its core this means telling the software to search for keywords such as “end” or “and”, which act as connectors for independent clauses, breaking a multiple-intent query down into multiple single-intent queries and then using traditional techniques. “Deciphering intent is far more complex than just language interpretation. As humans, we know language is imbued with all kinds of nuances and contextual inferences. And actually, humans aren’t that great at expressing intent, either. Therein lies the real challenge for developers,” said ElRifai.  ... “In many cases, that’s what you need, but when we look more broadly at the kinds of problems that businesses face, across many different industries, the vast majority of problems actually don’t follow that ‘one thing well’ model all that well. Many of the things we’d like to automate are more like puzzles to be solved, where we need to take in lots of different kinds of data, reason about them and then test out potential solutions,” said IBM’s Cox.
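
A toy sketch of the keyword-splitting approach ElRifai describes: break the query on connector words, then hand each clause to a single-intent classifier (stubbed out here, since the real classifier would be a traditional NLU model).

```python
import re

CONNECTORS = r"\b(?:and|then|also)\b"

def split_intents(query: str) -> list[str]:
    """Break a multi-intent query into candidate single-intent clauses."""
    clauses = re.split(CONNECTORS, query, flags=re.IGNORECASE)
    return [c.strip(" ,.") for c in clauses if c.strip(" ,.")]

def classify_intent(clause: str) -> str:
    """Stub standing in for a traditional single-intent classifier."""
    return "book_flight" if "flight" in clause else "unknown"

query = "Book me a flight to Delhi and remind me to pack tomorrow"
for clause in split_intents(query):
    print(clause, "->", classify_intent(clause))
```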


Code Obfuscation: A Comprehensive Guide Towards Securing Your Code

Since code obfuscation brings about deep changes in the code structure, it may bring about a significant change in the performance of the application as well. In general, rename obfuscation hardly impacts performance, since it is only the variables, methods, and classes that are renamed. On the other hand, control-flow obfuscation does have an impact on code performance. Adding meaningless control loops to make the code hard to follow adds overhead to the existing codebase, which makes it an essential feature to implement, but with abundant caution. A rule of thumb in code obfuscation is that the more techniques applied to the original code, the more time will be consumed in deobfuscation. Depending on the techniques and contextualization, the impact on code performance usually varies from 10 percent to 80 percent. Hence, potency and resilience, the factors discussed above, should become the guiding principles in code obfuscation, as any kind of obfuscation (except rename obfuscation) has an opportunity cost. Most of the obfuscation techniques discussed above do impose a cost on code performance, and it is up to development and security professionals to pick and choose the techniques best suited for their applications.
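
A toy before/after sketch of the two techniques contrasted above – rename obfuscation (cheap at runtime) versus a simplified control-flow obfuscation using an opaque predicate and a dummy loop (which adds real overhead). The transformed code is hand-written purely for illustration; real obfuscators apply these transforms automatically.

```python
# Original
def apply_discount(price, rate):
    return price - price * rate

# After rename obfuscation: identifiers lose meaning, behaviour is identical.
def a(b, c):
    return b - b * c

# After (simplified) control-flow obfuscation: an opaque predicate and a
# dummy loop obscure the logic and add overhead without changing the result.
def x(b, c):
    d = 0
    for _ in range(3):        # dummy loop, contributes nothing
        d += 0
    if (b * 0) == d:          # opaque predicate, always true
        return b - b * c
    return -1                 # dead branch, never reached
```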


Designing a High-throughput, Real-time Network Traffic Analyzer

Run-to-completion is a design concept which aims to finish the processing of an element as soon as possible, avoiding infrastructure-related interference such as passing data over queues, obtaining and releasing locks, etc. As a latency-sensitive data-plane component, the design of Behemoth (and some supplementary components) relies on that concept. This means that, once a packet is diverted into the app, its whole processing is done in a single thread (worker), on a dedicated CPU core. Each worker is responsible for the entire mitigation flow – pulling the traffic from a NIC, matching it to a policy, analyzing it, enforcing the policy on it, and, assuming it’s a legit packet, returning it back to the very same NIC. This design results in great performance and negligible latency, but has the obvious disadvantage of a somewhat messy architecture, since each worker is responsible for multiple tasks. Once we’d decided that AnalyticsRT would not be an integral “station” in the traffic data-plane, we gained the luxury of using a pipeline model, in which the real-time objects “travel” between different threads (in parallel), each one responsible for different tasks.
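
To make the contrast concrete, here is a minimal sketch of the pipeline model (stage and field names invented): each stage runs in its own thread and passes objects downstream over a queue, unlike the run-to-completion worker that performs every step itself on one core.

```python
import queue
import threading

parsed_q = queue.Queue()

def capture_stage(packets):
    """Stage 1: pull raw records and hand them to the next stage via a queue."""
    for pkt in packets:
        parsed_q.put({"src": pkt[0], "bytes": pkt[1]})
    parsed_q.put(None)  # sentinel: no more data

def analyze_stage():
    """Stage 2: aggregate traffic, running in parallel with stage 1."""
    totals = {}
    while (item := parsed_q.get()) is not None:
        totals[item["src"]] = totals.get(item["src"], 0) + item["bytes"]
    print("bytes per source:", totals)

t = threading.Thread(target=analyze_stage)
t.start()
capture_stage([("10.0.0.1", 1500), ("10.0.0.2", 600), ("10.0.0.1", 40)])
t.join()
```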


RASP A Must-Have Thing to Protect the Mobile Applications

The concept of RASP is effective because it helps in dealing with application-layer attacks. It also allows teams to define custom triggers so that critical business components are never compromised. The development team should take a skeptical approach when implementing security solutions so that the impact is never adverse. Implemented well, these solutions consume minimal resources, ensure that overall goals are met, and keep the negative impact on application performance to a minimum. Convincing stakeholders used to be a significant challenge for organizations, but RASP solutions have made it much easier because they are mobile-friendly. Convincing stakeholders is no longer a hassle, as RASP provides clear visibility into applications along with handling of security threats, allowing the solutions to work quietly in the background. The concept has proven to be a game-changer, helping companies satisfy their customers. Companies can take several approaches to implementation, including binary instrumentation, virtualization, and others.


Cyber Adversaries Are Exploiting the Global Pandemic at Enormous Scale

For cyber adversaries, the development of exploits at scale and the distribution of those exploits via legitimate and malicious hacking tools continue to take time. Even though 2020 looks to be on pace to shatter the number of published vulnerabilities in a single year, vulnerabilities from this year also have the lowest rate of exploitation ever recorded in the 20-year history of the CVE List. Interestingly, vulnerabilities from 2018 claim the highest exploitation prevalence (65%), yet more than a quarter of firms registered attempts to exploit 15-year-old CVEs from 2004. Exploit attempts against several consumer-grade routers and IoT devices were at the top of the list for IPS detections. While some of these exploits target newer vulnerabilities, a surprising number targeted exploits first discovered in 2014 – an indication that criminals are looking for exploits that still exist in home networks to use as a springboard into the corporate network. In addition, Mirai (2016) and Gh0st (2009) dominated the most prevalent botnet detections, driven by an apparent growing interest by attackers targeting older vulnerabilities in consumer IoT products.



Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent

Daily Tech Digest - August 15, 2020

Quantum Computing: What Does It Mean For AI (Artificial Intelligence)?

Roughly speaking, AI and ML are good ways to ask a computer to provide an answer to a problem based on past experience. It might be challenging to tell a computer what a cat is, for instance; still, if you show a neural network enough images of cats and tell it they are cats, then the computer will be able to correctly identify other cats it has not seen before. It appears that some of the most prominent and widely used AI and ML algorithms can be sped up significantly if run on quantum computers. For some algorithms we even anticipate exponential speed-ups, which does not simply mean performing a task faster, but rather taking a previously impossible task and making it possible, or even easy. While the potential is undoubtedly immense, this still remains to be proven and realized with hardware. ... One of the areas being looked at currently is artificial intelligence within financial trading. Quantum physics is probabilistic, meaning the outcomes constitute a predicted distribution. In certain classes of problems, where outcomes are governed by unintuitive and surprising relationships among the different input factors, quantum computers have the potential to better predict that distribution, thereby leading to a more correct answer.


Help Reinforce Privacy Through the Lens of GDPR

There are several key questions about GDPR compliance which delivery teams should consider. Where do you start on the GDPR compliance journey? What GDPR TOM controls apply to project delivery and how can your team implement them? What are the solution design guidelines for applicable GDPR TOMs? And, what GDPR compliance evidence do you need to show? Initial concern on the first anniversary (May 2019) of GDPR has faded. The second anniversary (May 2020) is the beginning of the enforcement wave. Delivery teams play a key role in that enforcement. To answer the above questions, let us first understand the compliance elements across the people, process and technology pillars and view the compliance model through a delivery team lens. ... The GDPR compliance model hooks the elements of people, process and technology into the delivery lifecycle phases. By doing this, it addresses delivery teams’ concerns about achieving and showing GDPR compliance. It provides the guidelines for the inclusion of GDPR TOMs in a project lifecycle. Below is a sample compliance model that demonstrates how a client can integrate the compliance elements into the delivery lifecycle phases. 


For six months, security researchers have secretly distributed an Emotet vaccine across the world

Through trial and error and thanks to subsequent Emotet updates that refined how the new persistence mechanism worked, Quinn was able to put together a tiny PowerShell script that exploited the registry key mechanism to crash Emotet itself. The script, cleverly named EmoCrash, effectively scanned a user's computer and generated a correct -- but malformed -- Emotet registry key. When Quinn tried to purposely infect a clean computer with Emotet, the malformed registry key triggered a buffer overflow in Emotet's code and crashed the malware, effectively preventing users from getting infected. When Quinn ran EmoCrash on computers already infected with Emotet, the script would replace the good registry key with the malformed one, and when Emotet would re-check the registry key, the malware would crash as well, preventing infected hosts from communicating with the Emotet command-and-control server. Effectively, Quinn had created both an Emotet vaccine and killswitch at the same time. But the researcher said the best part happened after the crashes. "Two crash logs would appear with event ID 1000 and 1001, which could be used to identify endpoints with disabled and dead Emotet binaries," Quinn said.


Ambiguous times are no time for ambiguous leadership

The nearly overnight rush to remote working has had clear benefits. It reduces the wear and tear of commuting for both people and the planet. It can also give employees more of a feeling of control over their lives, and, when geography is no longer a consideration, companies can find new opportunities for hiring talent. But if remote working is going to work, leaders have to communicate more and be extra vigilant about removing as much ambiguity as they can from their exchanges with staff, particularly in email, in which the recipients don’t have the benefit of hearing the sender’s tone. Leaders have to ensure that what is clear to them is also clear to others, in language that doesn’t leave people scratching their heads. The same is true for video meetings, conducted in small squares on your computer screen that can make it hard to read nuances of body language. There are some basic rules of human nature at play here. One of them is that with less face-to-face contact with bosses, employees are more likely to feel free-floating anxiety and wonder, “What do they think of me?” They may study email as if they were amateur archaeologists, searching for hidden meaning, often when none exists.


A reference architecture for multicloud

Data-focused multicloud deals with everything that’s stored inside and outside of the public clouds. Cloud-native databases exist here, as do legacy databases that still remain on-premises. The idea is to manage these systems using common layers, such as management and monitoring, security, and abstraction.  Service-focused multicloud means that we deal with behavior/services and the data bound to those services from the lower layers of the architecture. It’s pretty much the same general idea as data-focused multicloud, in that we develop and manage services using common layers of technology that span from the clouds back to the enterprise data center. Of course, there is much more to both layers. Remember that the objective is to remove humans from having to deal with the cloud and noncloud complexity using automation and other approaches. This is the core objective of multicloud complexity management, and it seems to be growing in popularity as a rising number of enterprises get bogged down by manual ops processes and traditional tools. Also note that this diagram depicts a multicloud that has very little to do with clouds, as I covered a few weeks ago.


5 Essential Business-Oriented Critical Thinking Skills For Data Science

What do we want to optimize for? Most businesses fail to answer this simple question. Every business problem is a little different and should, therefore, be optimized differently. For example, a website owner might ask you to optimize for daily active users. Daily active users is a metric defined as the number of people who open a product in a given day. But is that the right metric? Probably not! In reality, it’s just a vanity metric, meaning one that makes you look good but doesn’t serve any purpose when it comes to actionability. This metric will always increase if you are spending marketing dollars across various channels to bring more and more customers to your site. Instead, I would recommend optimizing the percentage of users that are active to get a better idea of how my product is performing. A big marketing campaign might bring a lot of users to my site, but if only a few of them convert to active, the marketing campaign was a failure and my site stickiness factor is very low. You can measure the stickiness by the second metric and not the first one. If the percentage of active users is increasing, that must mean that they like my website.
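
A tiny worked example of the distinction, with invented numbers: raw daily active users can rise purely because marketing brings in more sign-ups, while the active percentage exposes whether those users actually stick.

```python
# Invented before/after figures for a marketing campaign.
before = {"total_users": 10_000, "daily_active": 2_000}
after  = {"total_users": 50_000, "daily_active": 4_000}

for label, m in (("before", before), ("after", after)):
    pct_active = m["daily_active"] / m["total_users"] * 100
    print(f"{label}: DAU={m['daily_active']:,}  active%={pct_active:.0f}%")

# DAU doubled (looks great), but the active percentage fell from 20% to 8% --
# the campaign brought traffic, not stickiness.
```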


How ClauseMatch is disrupting regulatory compliance through AI

Inevitably, the RegTech sector was still in its early days and beset with challenges for Likhoded. He says banks were far from embracing cloud technologies and preferred to use traditional methods in their operations. As he put it: “Seven or eight years ago, not a single bank had departments working in innovation and technology. “Initially it was challenging to [convince] large financial institutions to use cloud platforms for their confidential and internal documentation,” he continues. While financial institutions were not quick to embrace technology, the times were a-changin’, he says. “Since 2014 we’ve seen a major shift and it’s been driven by the increase in adoption of cloud technologies which were now cheaper and faster to deploy,” Likhoded adds. As a result, in 2016 ClauseMatch signed up Barclays as a client. This deal propelled the startup’s growth and various other institutions started leveraging its regulation and compliance function. Having tier one banks as clients proved advantageous when it came to funding as VCs were able to see the use case for the RegTech startup. “[VCs] saw that regulations are not reducing but increasing and compliance departments have ballooned in size.


Answers To Today’s Toughest Endpoint Security Questions In The Enterprise

What’s important for CISOs to think about today is how they can lead their organizations to excel at automated endpoint hygiene. It’s about achieving a stronger endpoint security posture in the face of growing threats. Losing access to an endpoint doesn’t have to end badly; you can still have options to protect every device. It’s time for enterprises to start taking a more resilience-driven mindset and strategy to protecting every endpoint – focus on eliminating dark endpoints. One of the most proven ways to do that is to have endpoint security embedded at the BIOS level every day. That way, each device is still protected down to the local level. Using geolocation, it’s possible to “see” a device when it comes online and promptly brick it if it’s been lost or stolen. ... What CISOs and their teams need is the ability to see endpoints in near real-time and predict which ones are most likely to fail at compliance. Using a cloud-based or SaaS console to track compliance down to the BIOS level removes all uncertainty of compliance. Enterprises doing this today stay in compliance with HIPAA, GDPR, PCI, SOX and other compliance requirements at scale.


Recover your Data from a Back-up, not with a Ransom

Securing against ransomware must consequently be top of the agenda for not only IT leaders but also the c-suite executives in an organization. Endpoint security and end user education are important elements of a multi-pronged strategy to protect against ransomware, but data back-up is perhaps the key here. Given the persistence of cybercriminals, ransomware attacks are being perpetrated over a longer period and have taken the form of cyberattack campaigns. The chances of them succeeding have also grown manifold. A fragmented approach to data security adds to the risk. For instance, data protection and cybersecurity are two important elements that are intermeshed, but typically handled by two different teams. Lack of coordination between the two creates a disjointed view of the data security big picture in an organization. An integrated cybersecurity and data protection strategy is key to closing the security gap and ensuring various pieces of the data security puzzle fit together. But what if the unthinkable happens and a ransomware attack succeeds in penetrating these security layers? A Business Continuity and Disaster Recovery (BCDR) plan alongside effective cybersecurity is key in case of an inevitable attack.


MLops: The rise of machine learning operations

As a software developer, you know that completing a version of an application and deploying it to production isn’t trivial. But an even greater challenge begins once the application reaches production. End-users expect regular enhancements, and the underlying infrastructure, platforms, and libraries require patching and maintenance. Now let’s shift to the scientific world, where questions lead to multiple hypotheses and repetitive experimentation. You learned in science class to maintain a log of these experiments and track the journey of tweaking different variables from one experiment to the next. Experimentation leads to improved results, and documenting the journey helps convince peers that you’ve explored all the variables and that results are reproducible. Data scientists experimenting with machine learning models must incorporate disciplines from both software development and scientific research. Machine learning models are software code developed in languages such as Python and R, constructed with TensorFlow, PyTorch, or other machine learning libraries, run on platforms such as Apache Spark, and deployed to cloud infrastructure. The development and support of machine learning models require significant experimentation and optimization, and data scientists must prove the accuracy of their models.
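
One common way to keep that experiment log outside a notebook is a tracking tool such as MLflow. A minimal sketch (parameter and metric values are invented placeholders) that records the variables tweaked per run so results stay comparable and reproducible:

```python
import mlflow

# One tracked run per experiment; params are the variables being tweaked.
for lr in (0.1, 0.01):
    with mlflow.start_run(run_name=f"lr={lr}"):
        mlflow.log_param("learning_rate", lr)
        mlflow.log_param("model", "gradient_boosting")
        accuracy = 0.90 if lr == 0.01 else 0.84   # placeholder for a real evaluation
        mlflow.log_metric("val_accuracy", accuracy)

# `mlflow ui` then shows every run, its parameters, and its metrics side by side.
```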



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - August 14, 2020

Secure at every step: A guide to DevSecOps, shifting left, and GitOps

In practice, to hold teams accountable for what they develop, processes need to shift left to earlier in the development lifecycle, where development teams are. By moving steps like testing, including security testing, from a final gate at deployment time to an earlier step, fewer mistakes are made, and developers can move more quickly. The principles of shifting left also apply to security, not only to operations. It’s critical to prevent breaches before they can affect users, and to move quickly to address newly discovered security vulnerabilities and fix them. Instead of security acting as a gate, integrating it into every step of the development lifecycle allows your development team to catch issues earlier. A developer-centric approach means they can stay in context and respond to issues as they code, not days later at deployment, or months later from a penetration test report. Shifting left is a process change, but it isn’t a single control or specific tool—it’s about making all of security more developer-centric, and giving developers security feedback where they are. In practice, developers work with code and in Git, so as a result, we’re seeing more security controls being applied in Git.
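
As a concrete (if simplified) example of a developer-centric control applied in Git, a pre-commit hook can block obvious secrets before they ever reach a remote. This is an illustrative sketch only, not a replacement for a real secret scanner; the patterns shown are deliberately minimal.

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: fail the commit if staged changes appear to add a secret."""
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"]"),   # hard-coded credential assignment
]

staged_diff = subprocess.run(
    ["git", "diff", "--cached", "-U0"], capture_output=True, text=True, check=True
).stdout

added = [l[1:] for l in staged_diff.splitlines() if l.startswith("+") and not l.startswith("+++")]
hits = [l for l in added for p in SECRET_PATTERNS if p.search(l)]

if hits:
    print("Possible secrets in staged changes, aborting commit:", *hits, sep="\n  ")
    sys.exit(1)
```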


Resilience in Deep Systems

As your system grows, the connections between microservices become more complex. Communicating in a fault-tolerant way, and keeping the data that moves between services consistent and fresh, becomes a huge challenge. Sometimes microservices must communicate in a synchronous way. However, using synchronous communications, like REST, across the entire deep system makes the various components in the chain very tightly coupled to each other. It creates an increased dependency on the network’s reliability. Also, every microservice in the chain needs to be fully available to avoid data inconsistency, or worse, a system outage if one of the links in a microservices chain is down. In reality, we found that such a deep system behaves more like a monolith, or more precisely a distributed monolith, which prevents the full benefits of microservices from being enjoyed. Using an asynchronous, event-driven architecture enables your microservices to publish fresh data updates to other microservices. Unlike synchronous communication, adding more subscribers to the data is easy and will not hammer the publisher service with more traffic.
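
A minimal sketch of the asynchronous pattern using Kafka via the kafka-python client (topic name, broker address, and fields are invented): the publishing service emits fresh data as events, and any number of subscribers consume them at their own pace without adding synchronous load on the publisher.

```python
import json
from kafka import KafkaConsumer, KafkaProducer  # kafka-python

# Publisher side: emit an event whenever account data changes.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("account-updated", {"account_id": "a-42", "balance": 105.20})
producer.flush()

# Subscriber side (a separate service): consume events independently.
consumer = KafkaConsumer(
    "account-updated",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
for event in consumer:
    print("updating local read model for", event.value["account_id"])
    break
```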


Security Jobs With a Future -- And Ones on the Way Out

"The jobs aren't the same as two or three years ago," he acknowledges. "The types of skill sets employers are looking for is evolving rapidly." Three factors have led the evolution, O'Malley says. The first, of course, is COVID-19 and the sudden need for large-scale remote workforces. "Through this we are seeing a need for people who understand zero-trust work environments," he says. "Job titles around knowing VPN [technology] and how to enable remote work with the understanding that everyone should be considered an outsider [are gaining popularity]." The next trend is cloud computing. With more organizations putting their workloads in public and private clouds, they've become less interested in hardware expertise and want people who understand the tech's complex IT infrastructure. A bigger focus on business resiliency is the third major trend. The know-how needed here emphasizes technologies that make a network more intelligent and enable it to learn how to protect itself. Think: automation, artificial intelligence, and machine learning. The Edge asked around about which titles and skills security hiring managers are interested in today. 


Agile FAQ: Get started with these Agile basics

The Agile Manifesto prioritizes working software over comprehensive documentation -- though don't ignore the latter completely. This is an Agile FAQ for newcomers and experienced practitioners alike, as many people mistakenly think they should avoid comprehensive documentation in Agile. The Agile team should produce software documentation. Project managers and teams should determine what kind of documentation will deliver the most value. Product documentation, for example, helps customers understand, use and troubleshoot the product. Process documentation represents all of the information about planning, development and release. Similarly, Agile requirements are difficult to gather, as they change frequently, but they're still valuable. Rather than set firm requirements at the start of a project, developers change requirements during a project to best suit customer wishes and needs. Agile teams iterate regularly, and they should likewise adapt requirements accordingly. ... When developers start a new project, it can be hard to estimate how long each piece of the project will take. Agile teams can typically gauge how complex or difficult a requirement will be to fulfill, relative to the other requirements.


Facebook’s new A.I. takes image recognition to a whole new level

This might seem a strange piece of research for Facebook to focus on. Better news feed algorithms? Sure. New ways of suggesting brands or content you could be interested in interacting with? Certainly. But turning 2D images into 3D ones? This doesn’t immediately seem like the kind of research you’d expect a social media giant to be investing in. But it is — even if there’s no immediate plan to turn this into a user-facing feature on Facebook. For the past seven years, Facebook has been working to establish itself as a leading presence in the field of artificial intelligence. In 2013, Yann LeCun, one of the world’s foremost authorities on deep learning, took a job at Facebook to do A.I. on a scale that would be almost impossible in 99% of the world’s A.I. labs. Since then, Facebook has expanded its A.I. division — called FAIR (Facebook A.I. Research) — all over the world. Today, it dedicates 300 full-time engineers and scientists to the goal of coming up with the cool artificial intelligence tech of the future. It has FAIR offices in Seattle, Pittsburgh, Menlo Park, New York, Montreal, Boston, Paris, London, and Tel Aviv, Israel — all staffed by some of the top researchers in the field.


Honeywell Wants To Show What Quantum Computing Can Do For The World

The companies that understand the potential impact of quantum computing on their industries, are already looking at what it would take to introduce this new computing capability into their existing processes and what they need to adjust or develop from scratch, according to Uttley. These companies will be ready for the shift from “emergent” to “classically impractical” which is going to be “a binary moment,” and they will be able “to take advantage of it immediately.” The last stage of the quantum evolution will be classically impossible—"you couldn’t in the timeframe of the universe do this computation on a classical best-performing supercomputer that you can on a quantum computer,” says Uttley. He mentions quantum chemistry, machine learning, optimization challenges (warehouse routing, aircraft maintenance) as applications that will benefit from quantum computing. But “what shows the most promise right now are hybrid [resources]—“you do just one thing, very efficiently, on a quantum computer,” and run the other parts of the algorithm or calculation on a classical computer. Uttley predicts that “for the foreseeable future we will see co-processing,” combining the power of today’s computers with the power of emerging quantum computing solutions.


Data Prep for Machine Learning: Encoding

Data preparation for ML is deceptive because the process is conceptually easy. However, there are many steps, and each step is much more complicated than you might expect if you're new to ML. This article explains the eighth and ninth steps ... Other Data Science Lab articles explain the other seven steps. The data preparation series of articles can be found here. The tasks ... are usually not followed in a strictly sequential order. You often have to backtrack and jump around to different tasks. But it's a good idea to follow the steps shown in order as much as possible. For example, it's better to normalize data before encoding because encoding generates many additional numeric columns which makes it a bit more complicated to normalize the original numeric data. ... A complete explanation of the many different types of data encoding would literally require an entire book on the subject. But there are a few encoding techniques that are used in the majority of ML problem scenarios. Understanding these few key techniques will allow you to understand the less-common techniques if you encounter them. In most situations, predictor variables that have three or more possible values are encoded using one-hot encoding, also called 1-of-N or 1-of-C encoding.
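
A minimal sketch of one-hot (1-of-N) encoding for a predictor with three possible values, using pandas; the column and category names are placeholders.

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# Each of the N=3 category values becomes its own 0/1 column.
encoded = pd.get_dummies(df, columns=["color"], dtype=int)
print(encoded)
#    color_blue  color_green  color_red
# 0           0            0          1
# 1           0            1          0
# 2           1            0          0
# 3           0            1          0
```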


NIST Issues Final Guidance on 'Zero Trust' Architecture

NIST notes that zero trust is not a stand-alone architecture that can be implemented all at once. Instead, it's an evolving concept that cuts across all aspects of IT. "Zero trust is the term for an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets and resources," according to the guidelines document. "Transitioning to [zero trust architecture] is a journey concerning how an organization evaluates risk in its mission and cannot simply be accomplished with a wholesale replacement of technology." Rose notes that to implement zero trust, organizations need to delve deeper into workflows and ask such questions as: How are systems used? Who can access them? Why are they accessing them? Under what circumstances are they accessing them? "You're building a security architecture and a set of policies by bringing in more sources of information about how to design those policies. ... It's a more holistic approach to security," Rose says. Because the zero trust concept is relatively new, NIST is not offering a list of best practices, Rose says. Organizations that want to adopt this concept should start with a risk-based analysis, he stresses. 


Compliance in a Connected World

Early threat detection and response is clearly part of the answer to protecting increasingly connected networks, because without a threat, the risk, even to a vulnerable network, is low. However, ensuring the network is not vulnerable to adversaries in the first place is the assurance that many SOCs are striving for. Indeed, one cannot achieve the highest level of security without the other. Even with increased capacity in your SOC to review cyber security practices and carry out regular audits, the amount of information garnered, and its accuracy, is still at risk of being far too overwhelming for most teams to cope with. For many organisations the answers lie in accurate audit automation and the powerful analysis of aggregated diagnostics data. This enables frequent enterprise-wide auditing to be carried out without the need for skilled network assessors to undertake repetitive, time-consuming tasks which are prone to error. Instead, accurate detection and diagnostics data can be analysed via a SIEM or SOAR dashboard, which allows assessors to group, classify and prioritise vulnerabilities for fixes which can be implemented by a skilled professional, or automatically via a playbook.


The biggest data breach fines, penalties and settlements so far

GDPR fines are like buses: You wait ages for one and then two show up at the same time. Just days after a record fine for British Airways, the ICO issued a second massive fine over a data breach. Marriott International was fined £99 million [~$124 million] after payment information, names, addresses, phone numbers, email addresses and passport numbers of up to 500 million customers were compromised. The source of the breach was Marriott's Starwood subsidiary; attackers were thought to have been on the Starwood network for up to four years, some three of them after it was bought by Marriott in 2015. According to the ICO’s statement, Marriott “failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems.” Marriott CEO Arne Sorenson said the company was “disappointed” with the fine and plans to contest the penalty. The hotel chain was also fined 1.5 million Lira (~$265,000) by the Turkish data protection authority — not under the GDPR legislation — for the breach, highlighting how one breach can result in multiple fines globally.



Quote for the day:

"Making the development of people an equal partner with performance is a decision you make." -- Ken Blanchard

Daily Tech Digest - August 13, 2020

Building a Banking Infrastructure with Microservices

On the whole, the goal is to make engineers as autonomous as possible in organising their domain into the structure of the microservices they write and support. As a Platform Team, we provide knowledge and documentation and tooling to support that. Each microservice has an associated owning team and they are responsible for the health of their services. When a service moves owners, other responsibilities like alerts and code review also move over automatically. ... Code generation starts from the very beginning of a service. An engineer will use a generator to create the skeleton structure of their service. This will generate all the required folder structure as well as write boilerplate code so things like the RPC server are well configured and have appropriate metrics. Engineers can then define aspects like their RPC interface and use a code generator to generate implementation stubs of their RPC calls. Small reductions in cognitive overhead for engineers allow them to cumulatively focus on business choices and reduce the paradox of choice. We do find cases where engineers need to deviate. That’s absolutely okay; our goal is not to prescribe this structure for every single service. We allow engineers to make the choice, with the knowledge that deviations need appropriate documentation/justification and knowledge transfer.


Cybersecurity Skills Gap Worsens, Fueled by Lack of Career Development

The fundamental causes for the skills gap are myriad, starting with a lack of training and career-development opportunities. About 68 percent of the cybersecurity professionals surveyed by ESG/ISSA said they don’t have a well-defined career path, and basic growth activities, such as finding a mentor, getting basic cybersecurity certifications, taking on cybersecurity internships and joining a professional organization, are missing steps in their endeavors. The survey also found that many professionals start out in IT, and find themselves working in cybersecurity without a complete skill set. ... The COVID-19 pandemic is not helping matters on this front: “Increasingly, lockdown has driven us all online and the training industry has been somewhat slow to respond with engaging, practical training supported by skilled practitioners who can share their expertise,” Steve Durbin, managing director of the Information Security Forum, told Threatpost. “Apprenticeships, on the job learning, backed up with support training packages are the way to go to tackle head on a shortage that is not going to go away.”


The Top 10 Digital Transformation Trends Of 2020: A Post Covid-19 Assessment

Using big data and analytics has always been on a steady growth trajectory and then COVID-19 exploded and made the need for data even greater. Companies and institutions like Johns Hopkins and SAS created COVID-19 health dashboards that compiled data from a myriad of sources to help governments and businesses make decisions to protect citizens, employees, and other stakeholders. Now, as businesses are in re-opening phases, we are using data and analytics for contact tracing and to help make other decisions in the workplace. There have been recent announcements from several big tech companies including Microsoft, HPE, Oracle, Cisco and Salesforce focusing on developing data driven tools to help bring employees back to work safely — some even offering it for free to its customers. The need for data to make all business decisions has grown, but this year, we saw data analytics being used in real time to make critical business and life-saving decisions, and I am certain it won’t stop there. I expect massive continued investment from companies into data and analytics capabilities that power faster, leaner and smarter organizations in the wake of 2020’s Global Pandemic and economic strains.


How government policies are harming the IT sector | Opinion

Thanks to a series of misplaced policy choices, the government has systematically eroded the permitted operations of the Indian outsourcing industry to the point where it is no longer globally competitive. Foremost among these are the telecom regulations imposed on a category of companies broadly known as Other Service Providers (OSPs). Anyone who provides “application services” is an OSP and the term “application services” is defined to mean “tele-banking, telemedicine, tele-education, tele-trading, e-commerce, call centres, network operation centres and other IT-enabled services”. When it was first introduced, these regulations were supposed to apply to the traditional outsourcing industry, focusing primarily on call centre operations. However, it has, over the years been interpreted far more widely than originally intended. While OSPs do not require a license to operate, they do have to comply with a number of telecom restrictions. The central regulatory philosophy behind these restrictions is the government’s insistence that voice calls terminated in an OSP facility over the regular Public Switched Telephone Network (PSTN) must be kept from intermingling with those carried over the data network. 


Data science's ongoing battle to quell bias in machine learning

Data bias is tricky because it can arise from so many different things. As you have keyed into, there should be initial consideration of how the data is being collected and processed, to see whether operational or process-oversight fixes exist that could prevent human bias from entering at the data creation phase. The next thing I like to look at is data imbalances between classes, features, etc. Oftentimes, models can be flagged as treating one group unfairly, but the reason is that there is not a large enough population of that class to really know for certain. Obviously, we shouldn't use models on people when there's not enough information about them to make good decisions. ... Machine learning interpretability [is about] how transparent model architectures are and increasing how intuitive and understandable machine learning models can be. It is one of the components that we believe makes up the larger picture of responsible AI. Put simply, it's really hard to mitigate risks you don't understand, which is why this work is so critical. By using things like feature importance, Shapley values, and surrogate decision trees, we are able to paint a really good picture of why the model came to the conclusion it did -- and whether the reason it came to that conclusion violates regulatory rules or fails to make common business sense.
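To make the feature-importance idea a bit more concrete, here is a minimal, hypothetical sketch in plain C# (not drawn from the interview and not tied to any library): it treats a trained model as an opaque scoring function, scrambles one feature column at a time, and reports how much accuracy drops when that column no longer carries information.

```csharp
using System;
using System.Linq;

// Illustrative permutation feature importance: a simple interpretability technique
// in the same family as the methods mentioned above. All names are hypothetical.
public static class PermutationImportance
{
    // model: maps a feature vector to a predicted class label
    // features: rows of feature values; labels: the corresponding ground truth
    public static double[] Compute(Func<double[], int> model, double[][] features, int[] labels)
    {
        var rng = new Random(42);
        double baseline = Accuracy(model, features, labels);
        int featureCount = features[0].Length;
        var importances = new double[featureCount];

        for (int f = 0; f < featureCount; f++)
        {
            // Shuffle a single feature column to break its relationship with the label.
            var shuffled = features.Select(row => (double[])row.Clone()).ToArray();
            var permuted = shuffled.Select(row => row[f]).OrderBy(_ => rng.Next()).ToArray();
            for (int i = 0; i < shuffled.Length; i++) shuffled[i][f] = permuted[i];

            // Importance = how much accuracy drops when this feature is scrambled.
            importances[f] = baseline - Accuracy(model, shuffled, labels);
        }
        return importances;
    }

    private static double Accuracy(Func<double[], int> model, double[][] features, int[] labels) =>
        features.Select((row, i) => model(row) == labels[i] ? 1.0 : 0.0).Average();
}
```

More faithful attributions such as Shapley values follow the same spirit but account for feature interactions, which is why they are usually computed with dedicated tooling rather than a hand-rolled loop like this.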


Integration Testing ASP.NET Core Applications - Best Practices

Compared to unit tests, this allows much more of the application code to be tested together, which can rapidly validate the end-to-end behaviour of your service. These are also sometimes referred to as functional tests, since the definition of integration testing may be applied to more comprehensive multi-service testing as well. It’s entirely possible to test your applications in concert with their dependencies, such as databases or other APIs they expect to call. In the course, I show how boundaries can be defined using fakes to test your application without external dependencies, which allows your tests to be run locally during development. Of course, you can also forgo such fakes and test against real dependencies. This form of in-memory testing can then easily be expanded to broader testing of multiple services as part of CI/CD workflows. Producing these courses is a lot of work, but that effort is rewarded when people view the course and hopefully leave with new skills to apply in their work. If you have a subscription to Pluralsight already, I hope you’ll add this course to your bookmarks for future viewing.
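As a rough, hypothetical illustration of that pattern (it is not taken from the course, and Startup, IProductRepository, FakeProductRepository and the /api/products route are placeholder names), an xunit test built on the Microsoft.AspNetCore.Mvc.Testing package might look like this:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

public class ProductsApiTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;

    public ProductsApiTests(WebApplicationFactory<Startup> factory) => _factory = factory;

    [Fact]
    public async Task GetProducts_ReturnsSuccess()
    {
        // Replace the real repository with an in-memory fake so the test runs
        // locally without external dependencies such as a database.
        var client = _factory.WithWebHostBuilder(builder =>
                builder.ConfigureTestServices(services =>
                    services.AddSingleton<IProductRepository, FakeProductRepository>()))
            .CreateClient();

        // Exercise the full HTTP pipeline: routing, model binding, filters, serialization.
        var response = await client.GetAsync("/api/products");

        response.EnsureSuccessStatusCode();
    }
}
```

Because the fake is registered through dependency injection, pointing the same test at real dependencies later is largely a matter of omitting the override.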


How Robotic Process Automation (RPA) and digital transformation work together

RPA is not on its own an intelligent solution. As Everest Group explains in its RPA primer, “RPA is a deterministic solution, the outcome of which is known; used mostly for transactional activities and standardized processes.” Some common RPA use cases include order processing, financial report generation, IT support, and data aggregation and reconciliation. However, as organizations proceed along their digital transformation journeys, the fact that many RPA solutions are beginning to integrate cognitive capabilities increases their value proposition. For example, RPA might be coupled with intelligent character recognition (ICR) and optical character recognition (OCR). Contact center RPA applications might incorporate natural language processing (NLP) and natural language generation (NLG) to enable chatbots. “These are all elements of an intelligent automation continuum that allow a digital transformation,” Wagner says. “RPA is one piece of a lengthy continuum of intelligent automation technologies that, used together and in an integrated manner, can very dramatically change the operational cost and speed of an organization while also enhancing compliance and reducing costly errors.”


It’s not about cloud vs edge, it’s about connections

“What is wanted is a new type of networking platform that establishes a reliable, high performance, zero trust connection across the Internet — meaning one that will only connect an authorised device and authorised user using an authorised application (i.e. ‘zero trust’),” he said. “With zero trust, every connection is continuously assessed to identify who or what is requesting access, have they properly authenticated, and are they authorised to use the resource or service being requested — before any network access is permitted. “This can be achieved using software-defined networking loaded into the edge device or embedding networking capabilities into applications with SDKs and APIs. This eliminates the need to procure, install and commission hardware. Unlike VPNs, these software-defined connections can be tightly segmented according to company policies (policy-based access), determining which workgroups or devices can be connected, and what they can share and how. “This suggests a new paradigm: an edge-core-cloud continuum, where apps and services will run wherever most needed, connected via zero trust network access (ZTNA) capable of securing the edge-to-cloud continuum end to end....”


Put Value Creation at the Center of Your Transformation

The leader of any transformation effort needs to be resilient and determined to deliver the program’s full potential. Yet that person also needs to understand and acknowledge the needs of employees during a radical upheaval. Sometimes leaders must be pragmatic—particularly when the company’s long-term survival is at stake. At other times, empathy and flexibility are more effective. One CEO brought determination and conviction to the company’s transformation, and he was able to tamp down dissent, gossip, and negative press. He was also willing to reverse his decisions on some matters. For example, one cost-reduction measure was a cutback in employee travel. Initially, the CEO told employees that they needed direct approval from him for any travel expenses above a certain amount. However, after about a year, he relaxed this policy after considering employees’ feedback. ... Transformations are a proving ground for leadership teams. They can be catalysts to long-term business success and financial performance—but companies undergoing a transformation underperform almost as often as they outperform. Our analysis shows that there is a systematic way to increase the odds of success.


The sinking fortunes of the shipping box

Another surprising problem for the global manufacturing model is that shipping has actually become less efficient, largely due to business decisions of the shippers. Maersk, the world-leading Danish firm, continued to order ever-larger container ships after the financial crisis, convinced that consumer demand would quickly resume its previous growth. When it did not, the firm and its competitors were forced to sail half-full megaships around the world. Because the ships were several meters wider than their predecessors, the process of removing containers took longer. And they were designed to travel more slowly to conserve fuel. Delays became much more common, undermining trust in the industry. Without reliable shipping, Levinson writes, firms have chosen to hold more inventory — which flies in the face of the prevailing orthodoxy. But things have changed. Inventories can act as a buffer when supply chains are in distress. For firms, “minimizing production costs was no longer the sole priority; making sure the goods were available when needed ranked just as highly.” It seems inevitable that the coronavirus pandemic will reinforce this drift back toward greater self-sufficiency in manufacturing.



Quote for the day:

"To be successful you have to be lucky, or a little mad, or very talented, or find yourself in a rapid growth field." -- Edward de Bono

Daily Tech Digest - August 12, 2020

Can behavioural banking drive financial literacy and inclusion?

In good times, the need to improve financial literacy is widely accepted by banking industry leaders and consumers alike. This important topic is regularly discussed by experts at the World Economic Forum and built into initiatives sponsored by the United Nations. Regarded as an economic good, financial literacy is critical to achieving financial inclusion. What about now, in decidedly less-than-good times? How are banks prepared to promote financial literacy for millennials and especially Gen Z, as they face a world in financial turmoil? ... The right systems helped the bank get up and running just 18 months after its initial launch announcement. Powerful, reliable technology also helped the company create a customer onboarding application that can open a new account within just five minutes. “The technology is extremely important for us,” says Frey. “It has to be fast, agile, and robust. We needed a solid workhorse with a huge amount of flexibility at the configuration level.” In 2020, Discovery will begin looking for ways to incorporate rapidly developing technologies such as artificial intelligence and machine learning into its solutions. Most important, however, is listening to customers and ensuring that the bank delivers the most pleasant, rewarding experience possible.


With DevOps, security is everybody’s responsibility. OK, so what’s next?

DevSecOps solutions are by nature designed to be preventative. The idea is to remove complexity by baking robust security methodologies into software development from the earliest stages. Get it right from the outset, and reactive firefighting is greatly reduced. Conveniently, this model – “shifting security left” to the coder rather than the expert in a fixed hierarchy – also makes sense when developing on cloud platforms that assume rapid deployment and collaboration. There is no development team, security team, or IT deployment team because they are one and the same person. In theory, that’s how security misconfigurations can be caught before they do harm. However, when it comes to cloud development, “shift left” is more talked about than practised. This situation has crept up on organisations that haven’t realised how programming culture has changed rapidly in the cloud era. “There is a lack of control in this model. With the shift into cloud development and the fact that coders can always get a better answer on Stack Overflow and GitHub, it’s become practically impossible to track the supply chain. It’s a governance problem,” says Guy Eisenkot.


Surface Duo: Microsoft's $1,400 dual-screen Android phone coming September 10

Microsoft is counting on users seeing the Duo as filling an untapped niche. But for people used to thinking about carrying no more than two devices -- usually a PC/tablet or phone -- where does the Duo fit? In its first iteration, with a seemingly mediocre 11 MP camera, an older Snapdragon 855 processor and a relatively heavy form factor (about half a pound), the Duo is not going to replace my Pixel 3XL Android phone. And with a total screen size when open of 8.1 inches, the Duo is just too small to replace my PC. Panay and team are touting the Duo as a device that will give people a better way to get things done, to create and to connect. As was the case with the currently postponed, Windows 10X-based Surface Neo device, Microsoft's contention is that two separate screens connected via a hinge help people work smarter and faster than they could with a single screen of any size. Officials say they've got research and years of work that back up this claim. I do think more screen is better for almost everything, but for now, I am having trouble buying the idea that a hinge/division in the middle of two screens is going to make any kind of magic happen in my brain.


The clear Sky strategy

You need to have your eyes to the horizon and your feet on the floor. At all times. And it’s quite a discipline to do that. You see a lot of people who are consumed about managing the now, and then if you look at the last few months, there’s not been a lot of forward thinking. Then you also see other people who, perhaps the longer they are in their roles, spend more and more time thinking about the future horizon. That’s all very alluring and appealing, but they disconnect with the immediacy of what’s important today. You must try to think of both of those things and also encourage everybody else to think of their own role in that way. So, if you’re in broadcast technology today and you’re running that function or department, how do you get your colleagues to look at the future broadcast technologies and at the same time equip people to shoot with their iPhones and get the news out quickly? What you end up with is this networked brain. Everybody in Sky should be thinking about where the company should go, but also “How do I personally make sure I’m doing what is needed?”


Did Intel fail to protect proprietary secrets, or misconfigure servers? Lessons from the leak

Regardless of the circumstances, there are key takeaways from the incident. First and foremost, the unauthorized disclosure of source code and other sensitive intellectual property could potentially be a boon for those seeking to steal corporate secrets. “Intel’s technology is almost ubiquitous, and the leaked device designs and firmware source code can put businesses and individuals at risk,” said Ilia Sotnikov, VP of product management at Netwrix. “Hackers and Intel’s own security research team are probably racing now to identify flaws in the leaked source code that can be exploited. Companies should take steps to identify what technology may be impacted and stay tuned for advisory and hotfix announcements from Intel.” “While we often think of data breaches in the context of customer data lost and potential PII leakage, it is very important that we also consider the value of intellectual property, especially for very innovative organizations and organizations with a large market share,” said Erich Kron, security awareness advocate at KnowBe4. “This intellectual property can be very valuable to potential competitors, and even nation states, who often hope to capitalize on the research and development done by others.”


Researchers Trick Facial-Recognition Systems

The model then continuously created and tested fake images of the two individuals by blending the facial features of both subjects. Over hundreds of training loops, the machine-learning model eventually got to a point where it was generating images that looked like a valid passport photo of one of the individuals, even as the facial recognition system identified the photo as the other person. Povolny says the passport-verification system attack scenario — though not the primary focus of the research — is theoretically possible to carry out. Because digital passport photos are now accepted, an attacker can produce a fake image of an accomplice, submit a passport application, and have the image saved in the passport database. So if a live photo of the attacker later gets taken at an airport — at an automated passport-verification kiosk, for instance — the image would be identified as that of the accomplice. “This does not require the attacker to have any access at all to the passport system; simply that the passport-system database contains the photo of the accomplice submitted when they apply for the passport,” he says.


The problems AI has today go back centuries

The ties between algorithmic discrimination and colonial racism are perhaps the most obvious: algorithms built to automate procedures and trained on data within a racially unjust society end up replicating those racist outcomes in their results. But much of the scholarship on this type of harm from AI focuses on examples in the US. Examining it in the context of coloniality allows for a global perspective: America isn’t the only place with social inequities. “There are always groups that are identified and subjected,” Isaac says. The phenomenon of ghost work, the invisible data labor required to support AI innovation, neatly extends the historical economic relationship between colonizer and colonized. Many former US and UK colonies—the Philippines, Kenya, and India—have become ghost-working hubs for US and UK companies. The countries’ cheap, English-speaking labor forces, which make them a natural fit for data work, exist because of their colonial histories. AI systems are sometimes tried out on more vulnerable groups before being implemented for “real” users. Cambridge Analytica, for example, beta-tested its algorithms on the 2015 ...


The State of AI-Driven Digital Transformation

Governments are transforming service delivery through AI as well. In China, a number of AI pilot programmes are rolling out across the court system, including an “AI robot” that can answer legal questions in real time, tools to automate evidence analysis and the automated transcribing of court proceedings that would remove the need for judicial clerks to double as stenographers. These technological developments point to a future in which routine court procedures are mostly handled by machines, so that judges can reserve their attention for more complex and demanding cases. The other major use of AI would be in the areas of security and data privacy. In fact, the Forrester study found that 61 percent of firms in APAC are already enhancing or implementing their data privacy and security-related capabilities using AI. For example, financial services giant AXA IT has been leveraging machine learning and AI to thwart online security threats. They’ve partnered with cybersecurity firm Darktrace whose Enterprise Immune System learns how normal users behave so as to detect dangerous anomalies with the help of AI. Data lie at the heart of AI. The success of AI-driven digital transformation, therefore, relies greatly on the ability to draw insights from big data. 


How to Keep APIs Secure From Bot Attacks

Many APIs do not check authentication status when the request comes from a genuine user. Attackers exploit such flaws in different ways, such as session hijacking and account aggregation, to imitate genuine API calls. Attackers also reverse engineer mobile applications to discover how APIs are invoked. If API keys are embedded into the application, an API breach may occur. API keys should not be used for user authentication. Cybercriminals also perform credential-stuffing attacks to take over user accounts. ... Many APIs lack robust encryption between the API client and server. Attackers exploit vulnerabilities through man-in-the-middle attacks. Attackers intercept unencrypted or poorly protected API transactions to steal sensitive information or alter transaction data. Also, the ubiquitous use of mobile devices, cloud systems and microservice patterns further complicates API security because multiple gateways are now involved in facilitating interoperability among diverse web applications. The encryption of data flowing through all these channels is paramount. ... APIs are vulnerable to business logic abuse. This is exactly why a dedicated bot management solution is required and why applying detection heuristics that are good for both web and mobile apps can generate many errors — false positives and false negatives.
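A minimal ASP.NET Core configuration along those lines might look like the sketch below; it is illustrative only, the identity-provider authority and audience values are placeholders, and bot detection itself still needs a dedicated management layer on top of this baseline. The idea is simply to require an authenticated JWT bearer token on every endpoint instead of relying on API keys embedded in clients, and to keep all API traffic encrypted.

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // Authenticate callers with short-lived JWT bearer tokens rather than
        // long-lived API keys baked into mobile or web applications.
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
                .AddJwtBearer(options =>
                {
                    options.Authority = "https://login.example.com"; // hypothetical identity provider
                    options.Audience = "orders-api";                  // hypothetical audience
                });

        services.AddAuthorization();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Encrypt traffic between API client and server; never serve the API over plain HTTP.
        app.UseHttpsRedirection();

        app.UseRouting();
        app.UseAuthentication();
        app.UseAuthorization();

        app.UseEndpoints(endpoints =>
        {
            // Every controller endpoint requires an authenticated caller by default.
            endpoints.MapControllers().RequireAuthorization();
        });
    }
}
```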


Blazor vs Angular

Blazor is also a framework that enables you to build client web applications that run in the browser, but using C# instead of TypeScript. When you create a new Blazor app, it arrives with a few carefully selected packages (the essentials needed to make everything work) and you can install additional packages using NuGet. From here, you build your app as a series of components, using the Razor markup language, with your UI logic written using C#. The browser can't run C# code directly, so just like the Angular AOT approach you'll lean on the C# compiler to compile your C# and Razor code into a series of .dll files. To publish your app, you can use .NET's built-in publish command, which bundles up your application into a number of files (HTML, CSS, JavaScript and DLLs), which can then be published to any web server that can serve static files. When a user accesses your Blazor WASM application, a Blazor JavaScript file takes over, which downloads the .NET runtime, your application and its dependencies before running your app using WebAssembly. Blazor then takes care of updating the DOM, rendering elements and forwarding events (such as button clicks) to your application code.
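For a sense of what that component model looks like, here is the counter component that ships with the default Blazor template (reproduced from memory, so treat it as a sketch): Razor markup and the C# logic that drives it live together in a single .razor file.

```razor
@* Counter.razor : markup plus the C# event-handling logic in one component *@
<h3>Counter</h3>

<p>Current count: @currentCount</p>

<button @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    // Blazor invokes this plain C# method on click, then re-renders the affected DOM.
    private void IncrementCount() => currentCount++;
}
```

Running the publish command (dotnet publish -c Release) then produces the static HTML, CSS, JavaScript and .dll output described above.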


AI company pivots to helping people who lost their job find a new source of health insurance

In addition to making health insurance somewhat easier to get, the Affordable Care Act funded navigators who helped individuals choose the right insurance plan. The Trump administration cut funding for the navigators from $63 million in 2016 to $10 million in 2018. During the 2019 open enrollment period for the federal ACA health insurance marketplace, overall enrollment dropped by 306,000 people. "While that may not seem like a lot, the average annual medical expense is around $3,000 per person, and a shortfall of covered patients could represent over $900,000,000 of medical expenses that will not be paid by health insurance," Showalter said. When states banned elective medical procedures temporarily during the early months of the pandemic, this cut off an important revenue stream for hospitals, and many of them laid off workers. Some of these layoffs included patient navigators who helped patients enroll in health insurance, particularly Medicaid. Showalter said that all Jvion customers have had at least a few navigators on staff but not enough to reach every patient in need of assistance.



Quote for the day:

"A good general not only sees the way to victory; he also knows when victory is impossible." -- Polybius