Daily Tech Digest - April 12, 2022

What Data Privacy Really Needs Now Is A Digital Transformation

To begin your company's data privacy digital transformation, you should do two main things. First, define your company's privacy requirements. Create a clear list of the current needs you have. Do you need help managing and fulfilling users' privacy requests? Do you need a consent management tool? Do you want to automate your data mapping efforts? Do you need third-party risk assessment? Make sure you clearly define your desired set of requirements based on your user base size, business assets and countries of operation. Depending on where your business and customers reside, you will need to research the requirements for data privacy compliance in each of those countries. ... A digital transformation will help the data privacy field make strides as it progresses. With privacy technology and automation, companies can seamlessly integrate data privacy into their businesses, products and customer experiences. Data ownership marks a new era in the digital world, and to make it possible and successful, we have to welcome this change with smart technologies and an open mind.


Introduction to BigLake tables

BigLake is a unified storage engine that simplifies data access for data warehouses and lakes by providing uniform fine-grained access control across multi-cloud storage and open formats. BigLake extends BigQuery's fine-grained row- and column-level security to tables on data resident object stores such as Amazon S3, Azure Data Lake Storage Gen2, and Google Cloud Storage. BigLake decouples access to the table from the underlying cloud storage data through access delegation. This feature helps you to securely grant row- and column-level access to users and pipelines in your organization without providing them full access to the table. After you create a BigLake table, you can query it like other BigQuery tables. BigQuery enforces row- and column-level access controls, and every user sees only the slice of data that they are authorized to see. Governance policies are enforced on all access to the data through BigQuery APIs. For example, the BigQuery Storage API lets users access authorized data using open source query engines such as Apache Spark ... For data administrators, BigLake lets you abstract access management on data lakes from files to tables, and it helps you manage users' access to data on lakes.
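
As a minimal sketch of the querying experience described above: once a BigLake table exists, it is queried through the ordinary BigQuery client, and the service applies the row- and column-level policies for the caller. The project, dataset, and table names below are placeholders, and the snippet assumes default Google Cloud credentials are configured.

```python
# Minimal sketch: querying a BigLake table through the standard BigQuery client.
# Project, dataset, and table names are placeholders, not real resources.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumes default credentials

# BigQuery enforces row- and column-level policies, so each caller only
# receives the slice of data they are authorized to see.
query = """
    SELECT order_id, order_total
    FROM `my-project.my_dataset.biglake_orders`
    WHERE order_date >= '2022-01-01'
"""
for row in client.query(query).result():
    print(row.order_id, row.order_total)
```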


Creating a Security Culture Where People Can Admit Mistakes

The serious lesson from that is to acknowledge but forgive errors. "He's said, many times, that he knew at that moment it was going to be OK," Ellis says. "Creating a safe culture requires a lot of practices, and one of them is closure. Humor is a great way to provide closure because you rarely laugh about something that is still creating tension." There isn't a lot to laugh about in cybersecurity, with security teams fighting off a growing number of cyberattacks and deploying protective measures for a fast-evolving environment. But security shouldn't be about browbeating people into doing the right thing or scaring them with the prospect of punishment. For security to be a team sport, you need to make people want to play. It's vitally important to your business to create a security culture — that is, an atmosphere in which someone who messes up and breaks something feels they can report it without getting blasted for their actions. This idea isn't new, but considering recent analysis about how some companies aren't backing up their source code, sometimes stories need to be repeated.


OpenSSH goes Post-Quantum, switches to qubit-busting crypto by default

The discrepancy in effort between multiplying two known primes together, and splitting that product back into its two factors, is pretty much the computational basis of a lot of modern online security…so if quantum computers ever do become both reliable and powerful enough to work their superpositional algorithmic magic on 3072-bit products of two primes, then breaking into messages we currently consider uncrackable in practice may become possible in theory. Even if you’d have to be a nation state to have even the tiniest chance of succeeding, you’d have turned a feat that everyone once considered computationally infeasible into a task that might just be worth having a crack at. This would undermine a lot of existing public-key crypto algorithms to the point that they simply couldn’t be trusted. Even worse, quantum computers that could crack new problems might also be used to have a go at older cryptographic puzzles, so that data we’d banked on keeping encrypted for at least N years because of its high value might suddenly be decryptable in just M years, where M < N, perhaps less by an annoyingly large amount.
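
The asymmetry the piece leans on can be shown with a toy sketch: multiplying two primes is instant, while recovering them from the product by naive trial division already takes noticeable work. The primes below are tiny for readability; real keys use primes hundreds of digits long, which puts trial division (and every known classical algorithm) out of reach.

```python
# Toy illustration of the multiply-vs-factor asymmetry. The primes here are
# tiny; real RSA-style keys use primes hundreds of digits long.
p, q = 1_000_003, 1_000_033
n = p * q  # forward direction: instant, regardless of size

def trial_factor(n):
    """Naive factoring by trial division; cost grows with the square root of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

print(n, trial_factor(n))
```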


7 tips for leading productive remote teams

“Managing productivity is one of the most complex things any one person or organization can aspire to do,” says Dr. Sahar Yousef, a cognitive neuroscientist at University of California—Berkeley. The first step, though, is to define what you mean by productive, she says. “You can’t improve or change something that is not measurable.” And you can’t trust your team if you can’t also verify that they are working productively. If, in the past, you measured how hard people were working by noting who was at their desk or who spoke up in meetings, you’ll have to find a new way. Those things aren’t available anymore and they were never a good measure of productivity anyway. “We measure baselines around productivity, not hours worked,” says Andi Mann, CTO at Qumu. Because tracking how many hours someone worked doesn’t tell you much about productivity, even when you could tell the difference between work and home. “I spent nine hours at work,” says Mann. “Does that mean I accomplished something? Not necessarily. So that’s not the measure I’m looking for. My team are grownups — coders, engineers, smart people. I measure metrics that matter — outputs and accomplishments.”


Expanding Devops With Infrastructure As Code

Given the need for software companies to constantly grow their customer bases, the relatively low cost of cash for the past decade and a half, and the ability to cross-sell and upsell, it is natural for software conglomerations to form. And so it was only a matter of time before Puppet Software and its peers, Ansible, Chef and SaltStack, were acquired once they built up sufficient momentum to demonstrate their likely longevity across service providers, smaller clouds, and enterprises that do not build their own DevOps software stacks. So Red Hat bought Ansible in October 2015 for around $100 million, and Ansible was absolutely one of the reasons why IBM was compelled to pay $34 billion to acquire Red Hat in October 2018. ... And then VMware paid an undisclosed sum to buy SaltStack in that same month. HashiCorp, which has built a big following with its Terraform and Vagrant configuration management tools, has gone all the way and built a complete DevOps/container platform and has also gone public – but HashiCorp is the exception, not the rule, and it will have to keep expanding its platform and adding more tools if it hopes to keep growing its business.


3 Ways Developers Can Boost Cloud Native Security

Developers’ interest in security has been a long time coming. Google search data shows that queries for terms like “what is DevSecOps” and “DevSecOps vs. DevOps” first popped up in 2014 and have been steadily rising since 2017. The cloud, microservices, containerization and APIs are responsible for this burgeoning interest. These innovative technologies aren’t only changing the way applications are built and operated; they’re also changing what’s needed from a security perspective. In a modern environment, developers, engineers and architects need to think about data privacy and security because today’s applications benefit from having security measures baked into discrete components. Before the cloud became as ubiquitous as it is today, traditional cybersecurity relied on a perimeter-based model. Measures like firewalls and browser isolation systems essentially “surrounded” on-premises networks and systems. Applications and data were secure because they were hosted on physically isolated infrastructure.


Data democratization leaves enterprises at risk

Data democratization strategies ensure that company data is easily accessible by all employees, regardless of their position, without the involvement of the IT department. As valuable company data is placed in the hands of more individuals, cybercriminals can broaden the scope of potential targets to hack. Now an entire organization’s employee population theoretically faces an increased risk of malware penetration, and IT departments have a more difficult time deciphering when an unauthorized user has infiltrated the cloud-based systems where the data lives. Many organizations have implemented traditional detection-based security technology to thwart these threats, yet these solutions are only able to detect threats with known malware signatures. As enterprises work to secure their cloud infrastructures, they need to consider that solutions that focus on detecting threats are unable to protect against sophisticated attacks. As mentioned, proper security is critical for data democratization. Yet, in order for data democratization to work and make an impact, productivity has to be a critical focus.


How CISOs Are Walking the Executive Tightrope

High-performing CISOs are taking strategic business objectives and efforts into account and adapting their security programs to deliver results that multiply business velocity and revenue, instead of hindering the business by basing a security program on threats and vulnerabilities alone. This means CISOs are also having to become more business-savvy, helping promote a security culture through shared values, trust, and accountability, often more through influencing skills than with the security and compliance hammer. “We're seeing the CISO role being elevated out from underneath the CIO's IT umbrella and becoming a direct report to the CEO,” explains John Hellickson, field CISO executive advisor for Coalfire. “This means they are expected to bring a high degree of business acumen in how they represent risk to their business peers and stakeholders.” He said the need for establishing business-aligned cybersecurity programs that go beyond typical control frameworks is now table stakes -- the ability to demonstrate positive business outcomes and ROI of security risk management activities and investments will continue to be expected in the years to come.


Patch Tuesday to End; Microsoft Announces Windows Autopatch

"A security gap forms when quality updates that protect against new threats aren't adopted in a timely fashion. A productivity gap forms when feature updates that enhance users' ability to create and collaborate aren't rolled out. As gaps widen, it can require more effort to catch up," Bela says. In a separately released Windows Autopatch FAQ, Microsoft says the updates will be applied to a small initial set of devices, evaluated and then graduated to increasingly larger sets, with an evaluation period at each progression. "This process is dependent on customer testing and verification of all updates during these rollout stages. The outcome is to assure that registered devices are always up to date and disruption to business operations is minimized, which will free an IT department from that ongoing task," Microsoft says. In addition, Microsoft says that in case of an issue, the Autopatch service can be paused by the customer or the service itself. "When applicable, a rollback will be applied or made available," it says.



Quote for the day:

"The secret of a leader lies in the tests he has faced over the whole course of his life and the habit of action he develops in meeting those tests." -- Gail Sheehy

Daily Tech Digest - April 11, 2022

So you want to change cloud providers

Cloud has never really been about saving money. It’s about maximizing flexibility and productivity. As one HN commenter points out, “I work on a very small team. We have a few developers who double as ops. None of us are or want to be sysadmins. For our case, Amazon’s ECS [Elastic Container Service] is a massive time and money saver.” How? By removing sysadmin functions the team previously had to fill. “Yes, most of the problems we had before could have been solved by a competent sysadmin, but that’s precisely the point—hiring a good sysadmin is way more expensive for us than paying a bit extra to Amazon and just telling them ‘please run these containers with this config.’ ” He’s doing cloud right. Others suggest that by moving to serverless options, they further reduce the need for sysadmins. Yes, the more you dig into services that are unique to a particular cloud, the less easy it is to migrate, no matter how many credits a provider throws at you. But, arguably, the less desire you’d have to migrate if your developers are significantly more productive because they’re not reinventing infrastructure wheels all the time.


How Not to Do Digital Transformation (Hint: It’s How Not to Do Data Integration, Too)

Digital transformation poses a set of difficult data management problems. How do you integrate data that originates in separate, sometimes geographically far-flung locations? Or, more precisely, how do you integrate data that is widely distributed in geophysical and virtual space in a timely manner? This last is one of the most misunderstood problems of digital transformation. Software vendors, cloud providers, and, not least, IT research firms talk a lot about digital transformation. Much of what they say can safely be ignored. In an essential sense, however, digital transformation involves knitting together jagged or disconnected business workflows and processes. It entails digitizing IT and business services, eliminating the metaphorical holes, analog and otherwise, that disrupt their delivery. It is likewise a function of cadence and flow: i.e., of ensuring that the digital workflows which underpin core IT and business services function smoothly and predictably; that processes do not stretch – grind to a halt as they wait for data to be made available or for work to be completed – or contract, i.e., that steps in a workflow are not skipped if resources are unavailable.


The Internet of Things in Solutions Architecture

Industrial customers seek to gain insights into their industrial data and achieve outcomes such as lower energy costs, detecting and fixing equipment issues, spotting inefficiencies in manufacturing lines, improving product quality, and improving production output. These customers are looking for visibility into operational technology (OT) data from machines and programmable logic controller (PLC) systems for performing root cause analysis (RCA) when a production line or a machine goes down. Furthermore, IoT improves production throughput without compromising product quality by understanding micro-stoppages of machinery in real time. Data collection and organization across multiple sources, sites, or factories are challenging to build and maintain. Organizations need a consistent representation of all their assets that can be easily shared with users and used to build applications, at a plant, across plants, and at a company level. Data collected and organized using on-premises servers is isolated to one plant. Most data collected on-premises is never analyzed and is thrown away due to a lack of open and accessible data.


10 NFT and cryptocurrency security risks that CISOs must navigate

When someone buys an NFT, they aren't actually buying an image, because storing photos in the blockchain is impractical due to their size. Instead, what users acquire is some sort of a receipt that points them to that image. The blockchain only stores the image's identification, which can be a hash or a URL. The HTTP protocol is often used, but a decentralized alternative to that is the Interplanetary File System (IPFS). Organizations who opt for IPFS need to understand that the IPFS node will be run by the company that sells the NFT, and if that company decides to close shop, users can lose access to the image the NFT points to. ... A blockchain bridge, sometimes called cross-chain bridge, does just that. "Due to their nature, usually they are not implemented strictly using smart contracts and rely on off-chain components that initiate the transaction on the other chain when a user deposits assets on the original chain," Prisacaru says. Some of the biggest cryptocurrency hacks involve cross-chain bridges, including Ronin, Poly Network, Wormhole.
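
A tiny sketch of the point above: what actually goes on-chain is an identifier for the asset, not the asset itself. Here the identifier is a SHA-256 content hash; an IPFS CID is derived from a similar digest of the file's bytes. The image data and URL below are placeholders for illustration only.

```python
# Sketch: the chain stores an identifier (hash or URL) that points to the
# image, not the image. Placeholder image bytes and URL are used here.
import hashlib

image_bytes = b"\x89PNG...placeholder image data..."   # stand-in for the real file

token_metadata = {
    "name": "Example NFT",
    "image_hash": hashlib.sha256(image_bytes).hexdigest(),  # what the chain stores
    "image_url": "https://example.com/artwork.png",          # off-chain, can disappear
}
print(token_metadata)
```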


How to achieve better cybersecurity assurances and improve cyber hygiene

Don’t believe that network engineers are immune to misconfiguring devices (including firewalls, switches, and routers) when making network changes to meet operational requirements. Human error creates some of the most significant security risks. It’s typically not the result of malicious intent – just an oversight. Technicians can inadvertently misconfigure devices and, as a result, they fall out of compliance with network policy, creating vulnerabilities. If not monitored closely, configuration drift can result in significant business risk. ... Network segmentation is a robust security measure often underutilized by network security teams. In the current threat landscape with increasingly sophisticated attacks, the successful prevention of network breaches cannot be guaranteed. However, a network segmentation strategy, when implemented correctly, can mitigate those risks by effectively isolating attacks to minimize harm. With a well-planned segmented network, it is easier for teams to monitor the network, identify threats quickly and isolate incidents. 
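
Catching the configuration drift described above often comes down to diffing a device's running configuration against its approved baseline and flagging any divergence. The sketch below assumes you already export both configurations to text files; the file names are placeholders.

```python
# Minimal sketch of drift detection: diff a device's running configuration
# against its approved baseline. File names are placeholders.
import difflib

def config_drift(baseline_path, running_path):
    with open(baseline_path) as a, open(running_path) as b:
        diff = list(difflib.unified_diff(
            a.readlines(), b.readlines(),
            fromfile="baseline", tofile="running"))
    return diff  # an empty list means no drift

drift = config_drift("fw01_baseline.cfg", "fw01_running.cfg")
if drift:
    print("Configuration drift detected:")
    print("".join(drift))
```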


‘It Depends’ — Kubernetes Excuse or Lack of Actionable Data?

Finding the right answer more quickly needs to be easier. It needs to require fewer cycles and get us to a higher degree of confidence in our answer. If we have an assured way to get there, we’re more likely to lean in — “It depends” and “I have a way to get the answer we need to make an informed decision.” Sure, there’s the initial discomfort (and stomach lurch) of the inverted loop, but there’s also a way to come out of it with a positive experience while you go along for the ride. The question then becomes how? In the past, the only way to identify all of the variables, understand the dependencies and their impact, and then make an informed decision was to approach it manually. We could do that by observation or experimentation, two approaches to learning that have their place in the application optimization process. But let’s be honest, a manual approach is just not viable, both in terms of the resources needed and the high level of confidence needed in the results — not to mention the lack of speed. Fortunately, today, machine learning and automation can help.


How to Maximize Your Organization's Cloud Budget

To optimize a cloud budget, start with the smallest possible allowable instance that's capable of running an application or service, recommends Michael Norring, CEO of engineering consulting firm GCSIT. “As demand increases, horizontally scale the application by deploying new instances either manually or with auto-scaling, if possible.” Since cloud service costs increase exponentially the larger the size of the service, it's generally cheaper and more affordable to use small instances. “This is why when deploying services, it's better to start with a fresh install, versus lifting-and-shifting the application or service with all its years of cruft,” he says. ... Many enterprises already use multiple clouds from various providers, observes Bernie Hoecker, partner and enterprise cloud transformation lead with technology research and advisory firm ISG. He notes that adopting a multi-cloud estate is an effective strategy that allows an organization to select providers on the basis of optimizing specific applications. Enterprises also turn to multiple clouds as a mechanism to deal with resiliency and disaster recovery, or as a hedge to prevent vendor lock-in. “A multi-cloud estate makes IT management and governance complex,” Hoecker observes.
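
The "start small, scale horizontally" advice can be sketched as a simple control loop: keep the instance size fixed and adjust the instance count with demand. The thresholds, utilization figures, and the notion of a "small instance" below are illustrative stand-ins for whatever your cloud provider's auto-scaling policy would encode.

```python
# Hypothetical sketch: horizontal scaling of small, fixed-size instances.
def desired_instance_count(current, cpu_utilization, target=0.6,
                           min_count=1, max_count=20):
    if cpu_utilization > target * 1.2:
        current += 1           # scale out on sustained high utilization
    elif cpu_utilization < target * 0.5 and current > min_count:
        current -= 1           # scale in when demand drops
    return max(min_count, min(max_count, current))

count = 1
for utilization in (0.85, 0.90, 0.75, 0.25, 0.20):
    count = desired_instance_count(count, utilization)
    print(f"utilization={utilization:.2f} -> {count} small instances")
```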


Cybersecurity is IT’s Job, not the Board’s, Right?

Directors should prepare ahead of time to prevent the effects of cyberattacks and mitigate the risk of personal liability. Broadly speaking, boards must implement a reporting system and monitor or oversee the operation of that system to prevent personal liability under Caremark. In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959, 970 (Del. Ch. 1996). In Caremark, shareholders filed a derivative suit against the board after the company was required to pay approximately $250 million for violations of federal and state health care laws and regulations. Id. at 960–61. The Delaware Chancery Court held that directors can be held personally liable for failing to “appropriately monitor and supervise the enterprise.” Id. at 961. The court emphasized that the board must make a good faith effort to implement an adequate information and reporting system and that the failure to do so can constitute an “unconsidered failure of the board to act in circumstances in which due attention would, arguably, have prevented the loss.” Id. at 967. While Caremark did not address cybersecurity directly, the court’s reasoning in Caremark is applicable to board involvement, or lack thereof, with cybersecurity.


5 Types of Cybersecurity Skills That IT Engineers Need

Since IT engineers are typically the people who configure cloud environments, understanding these risks, and how to manage them, is a critical cybersecurity skill for anyone who works in IT. This is why IT operations teams should learn the ins and outs of cloud security posture management, or CSPM, the discipline of tools and processes designed to help mitigate configuration mistakes that could invite security breaches. They should also understand cloud infrastructure entitlement management, which complements CSPM by detecting types of risks that CSPM alone can't handle. ... Even well-designed networks that resist intrusion can be vulnerable to distributed denial of service, or DDoS, attacks, which aim to take workloads offline by overwhelming them with illegitimate network requests. To keep workloads operating reliably then, IT operations engineers should have at least a working knowledge of anti-DDoS techniques and tools. Typically, anti-DDoS strategies boil down to deploying services that can filter and block hosts that may be trying to launch a DDoS attack. 
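
The "filter and block" idea behind most anti-DDoS tooling reduces to counting requests per source over a window and blocking sources that exceed a threshold. The sketch below is a simplified, single-process illustration with made-up thresholds; real services do this at the network edge and at far greater scale.

```python
# Simplified sketch of rate-based filtering: block sources that exceed a
# request threshold within a sliding time window. Thresholds are illustrative.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100           # illustrative threshold
recent = defaultdict(deque)  # source IP -> timestamps of recent requests
blocked = set()

def allow_request(source_ip, now=None):
    now = now or time.time()
    if source_ip in blocked:
        return False
    q = recent[source_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) > MAX_REQUESTS:
        blocked.add(source_ip)
        return False
    return True
```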


EncroChat: France says ‘defence secrecy’ in police surveillance operations is constitutional

France’s Constitutional Council, which includes former prime ministers Laurent Fabius and Alain Juppé among its members, heard arguments on 29 March over whether the EncroChat and Sky ECC hacking operations were compatible with the right to a fair trial and the right to privacy guaranteed under the French constitution. At issue is a clause in the criminal code that allows prosecutors or magistrates to invoke “national defence secrecy” to prevent the disclosure of information about police surveillance operations that defence lawyers argue is necessary for defendants to receive a fair trial. French investigators used article 707-102-1 of the criminal code – described as a “legal bridge” between French police and the secret services – to ask France’s security service, DGSI, to carry out surveillance operations on two encrypted phone systems, EncroChat and Sky ECC. Patrice Spinosi, lawyer at the Council of State and the Supreme Court, representing the Association of Criminal Lawyers and the League of Human Rights, said the secret services hacking operation had struck a gold mine of information.



Quote for the day:

"Successful leaders see the opportunities in every difficulty rather than the difficulty in every opportunity" -- Reed Markham

Daily Tech Digest - April 10, 2022

Robots Developing The Unique Sixth Sense

In the sense of smell and taste, robots with chemical sensors could be far more precise than humans, but building in proprioception, the robot’s awareness of itself and its body, is far more challenging and is a big reason why humanoid robots are so tough to get right. Tiny modifications can make a big difference in human-robot interaction, wearable robotics, and sensitive applications like surgery. In the case of hard robotics, this is usually solved by putting a number of strain and pressure sensors in each joint, which allow the robot to figure out where its limbs are. This is fine for rigid robots with a limited number of joints, but it is insufficient for softer, more flexible robots. Roboticists are torn between having a large, complicated array of sensors for every degree of freedom in a robot’s mobility and having limited proprioception skills. This challenge is being addressed with new solutions, which often involve new arrays of sensory material and machine-learning algorithms to fill in the gaps. They discuss the use of soft sensors spread at random through a robotic finger in a recent study in Science Robotics.


The Rise of Enterprise Data Inflation

Data inflation ensues when spending on data rises without deriving proportional enterprise value from that spending. Surprisingly, digital transformation and application modernization have created fertile ground for data inflation to run rampant. As enterprises refactor applications and ever-expanding datasets aren’t managed carefully, enterprises experience data sprawl. Moving to the cloud to deliver more capability and use can inadvertently lead to data inflation. Often, a dataset is helpful across multiple areas of a business. Different development groups or people with unrelated objectives might make numerous copies of the same data. They often change a dataset’s taxonomy or ontology for their software or business processes, making it harder for others to identify it as a duplicate. This occurs because the average data scientist trying to hone in on a particular data insight has different priorities than the data engineers responsible for pipelining that data and creating new features. And the typical IT person has little visibility into the use of the data at all. The result is that the enterprise pays for many extra copies without getting any new value – a core driver of data inflation.


Will Apple build its own blockchain?

One thing that is pretty clear is that if Apple creates a specific carve-out for NFTs in its own App Store rules, it’s going to be on its own terms. They could take a number of different paths; I could see a world where Apple could only allow certain assets on certain blockchains or even build out their own blockchain. But Apple’s path toward controlling the user experience will most likely rely on Apple taking a direct hand in crafting their own smart contracts for NFTs, which developers might be forced to use in order to stay compliant with App Store rules. This could easily be justified as an effort to ensure that consumers have a consistent experience and can trust NFT platforms on the App Store. These smart contracts could send Apple royalties automatically and lead to a new in-app payment fee pipeline, one that could even persist in transactions that took place outside of the Apple ecosystem(!). More complex functionality could be baked in as well, allowing Apple to handle workflows like reversing transactions. Needless to say, any of these moves would be highly controversial among existing developers.


A Microservice Overdose: When Engineering Trends Meet the Startup Reality

Microservices are not the only big engineering trend that is happening right now. Another big trend that naturally comes together with microservices, is using a multi-repo version control approach. The multi-repo strategy enables the microservice team to maintain a separate and isolated repository for each responsibility area. As a result, one group may own a codebase end to end, developing and deploying features autonomously. Multi-repo seems like a great idea, until you realize that code duplication and configuration duplication are still not solved. Apart from the code duplication that we already discussed, there is a whole new area of repository configurations – access, permissions, branch protection, and so on. Such duplications are expected with a multi-repo strategy because multi-repo encourages a segmented culture. Each team does its own thing, making it challenging to prevent groups from solving the same problem repeatedly. In theory, a better alternative could be the mono-repo approach. In a mono-repo approach, all services and codebase are kept in a single repository. But in practice, mono-repo is fantastic if you’re Google / Twitter / Facebook. Otherwise, it doesn’t scale very well.


Talking Ethical AI with SuperBot’s Sarvagya Mishra

AI is the most transformative technology of our era. But it brings to the fore some fundamental issues as well. One, a rapidly expanding and pervasive technology powered by mass data, may bring about a revolutionary change in society; two, the nature of AI is to process voluminous raw information which can be used to automate decisions at scale; three, all of this is happening while the technology is still in the nascent stage. If we think about it, AI is a technology that can impact our lives in multiple ways – from being the backbone of devices that we use to how our economies function and even how we live. AI algorithms are already deployed across every major industry for every major use case. Since AI algorithms are essentially sets of rules that can be used to make decisions and operate devices, they could make judgement calls that harm an individual or a larger population. For instance, consider the AI algorithm for a self-driving car. It’s trained to be cautious and follow traffic rules, but what happens if it suddenly decides that breaking the rules is more beneficial? It could lead to a lot of accidents. 


Data Science: How to Shift Toward More Transparency in Statistical Practice

A common misconception about statistics is that it can give us certainty. However, statistics only describe what is probable. Transparency can be best achieved by conveying the level of uncertainty. By quantifying research inferences about uncertainty, a greater degree of trust can be achieved. Some researchers have done studies of articles in physiology, the social sciences, and medicine. Their findings demonstrated that error bars, standard errors, and confidence intervals were not always presented in the research. In some cases, omitting these measures of uncertainty can have a dramatic impact on how the information is interpreted. Areas such as health care have stringent database compliance requirements to protect patient data. Patients could be further protected by including these measures, and researchers can convey their methodology and give readers insights into how to interpret their data. ... When it comes to assessing data preprocessing choices, data scientists are often confronted with massive amounts of unorganized data.
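
Reporting uncertainty rather than a bare point estimate can be as simple as attaching a confidence interval to a mean. The sample values below are made up, and the 1.96 multiplier uses the normal approximation; for small samples a t critical value would be more accurate.

```python
# A 95% confidence interval for a sample mean (made-up data).
import statistics, math

sample = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 5.2]
n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean
# 1.96 is the normal approximation; use a t critical value for small samples.
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```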


DAO regulation in Australia: Issues and solutions, Part 2

So, the role of the government is to introduce regulations and standards, to make sure that people understand that when they publish a record — say, on Ethereum — it will become immutable and protected by thousands of running nodes all around the globe. If you publish it on some private distributed ledger network controlled by a cartel, you basically need to rely on its goodwill. The conclusion for this part of the discussion is the following. With blockchain, you don’t need any external registry database, as blockchain is the registry, and there is no need for the government to maintain this infrastructure, as the blockchain network is self-sustainable. Users can publish and manage records on a blockchain without a registrar, and there must be standards that allow us to distinguish reliable blockchain systems. ... The difference is that this must be designed as a standard requirement for the development of a compliant DAO. Those who desire to work under the Australian jurisdiction must develop the code of their decentralized applications and smart contacts compliant with these standards.


Data Governance Adoption: Bob Seiner on How to Empower Your People to Participate

When you consider the ADKAR model for change, any program adoption requires personal activation. “You need to find a way to make that connection with people,” Bob says. “ADKAR relies on personal traits and things that people need to adjust to and adopt to further the way they’re able to govern and steward data in their organization. Make it personable, make it reasonable, and help them understand they play a big role in data governance.” But even the most energized workforce can’t participate in active data governance without the right tools — your drivers won’t win their race without cars, after all. Like most large organizations, Fifth Third has a very divided data platform ecosystem, with several dozen tools employing both old and new technology. But as their vice president of enterprise data, Greg Swygart, notes, where data consumption starts and ends — curation and interaction — “the first step in the data marketplace is always Alation.” “Implementing an effective data governance program really requires getting people involved,” Bob concludes. 


AI Regulatory Updates From Around the World

Under the proposed ‘Artificial Intelligence Act,' all AI systems in the EU would be categorized in terms of their risk to citizens' privacy, livelihoods, and rights. ‘Unacceptable risk' covers systems that are deemed to be a "clear threat to the safety, livelihoods, and rights of people.” Any product or system which falls under this category will be banned. This category includes AI systems or applications that manipulate human behavior to circumvent users' free will and systems that allow ‘social scoring' by governments. The next category, 'High-risk,' includes systems for critical infrastructure which could put life or health at risk, systems for law enforcement that may interfere with people's fundamental rights, and systems for migration, asylum-seeking, and border control management, such as verification of the authenticity of travel documents. AI systems deemed to be high-risk will be subject to “strict obligations” before they can be put on the market, including risk assessments, high quality of the datasets, ‘appropriate’ human oversight measures, and high levels of security.


SEC Breach Disclosure Rule Makes CISOs Assess Damage Sooner

The central question facing CISOs who've experienced a security incident will be around how materiality is determined. The easiest way to assess whether an incident is material is by looking at the impact to sales as a percentage of the company's overall revenue or by tracking how many days a company's systems or operations are down as the result of a ransomware attack, Borgia says. But the SEC has pressured companies to consider qualitative factors such as reputation and the centrality of a breach to the business, he says. For instance, Pearson paid the SEC $1 million to settle charges that it misled investors about a breach involving millions of student records. Though the breach might not have been financially material, he says it cast doubt on Pearson's ability to keep student data safe. The impact of the proposed rule will largely come down to how much leeway the SEC provides breach victims in determining whether an incident is material. If the SEC goes after businesses for initially classifying an incident as immaterial and then changing their minds weeks or months later when new facts emerge, he says, companies will start putting out vague and generic disclosures that aren't helpful.



Quote for the day:

"Give whatever you are doing and whoever you are with the gift of your attention." -- Jim Rohn

Daily Tech Digest - April 09, 2022

Essentials of Enterprise Architecture Tool

EA tools allow organizations to map out their business process architecture, business capability architecture, application architecture, data architecture, integration architecture, and technology architecture. The common capabilities of an EA tool are: an EA repository, which supports business, information, technology, and solution viewpoints and their relationships, along with business direction, vision, strategy, etc.; EA modelling, which supports at minimum the business, information, solutions, and technology viewpoints, plus modelling of as-is and target states, impact analysis, and roadmaps; decision analysis capabilities such as gap analysis, traceability, impact analysis, scenario planning, and systems thinking; multiple views for different types of audiences/users such as executives, architects/designers, business planners, suppliers, etc.; customization and extension of the meta-model, diagrams, menus, matrices, and reports; and collaboration and sharing features, which include simultaneous model editing, a shared remote repository, version management including model comparison and merge, easy publishing, and review capabilities.


Could Blockchain Be Sustainability’s Missing Link?

Environmental sustainability is only one use case for blockchain technology. Companies can use distributed ledgers for social sustainability and governance. For example, pharmaceutical companies can collect data on a blockchain that identifies and traces prescription drugs. This data collection can prevent consumers from falling prey to counterfeit, stolen, or harmful products. Banks can collateralize physical assets, such as land titles, on a blockchain to keep an unalterable record and protect consumers from fraud. In supply chain finance, organizations can use distributed ledger technology to match the downstream flow of goods with the upstream flow of payments and information. That can help level the playing field for smaller financial institutions. Sustainability must be seamless. ServiceNow recently partnered with Hedera to help organizations easily adopt digital ledger technology on the Now Platform. This partnership provides a seamless connection between trusted workflows across organizations.


Supply chain woes? Analytics may be the answer

Enterprises face multiple risks throughout their supply chains, Deloitte says, including shortened product life cycles and rapidly changing consumer preferences; increasing volatility and availability of resources; heightened regulatory enforcement and noncompliance penalties; and shifting economic landscapes with significant supplier consolidation. ... “Often people think of the supply chain as one thing and it is not,” Korba says. “We think of the supply chain as the sum of several parts of the whole business operation — from understanding customer demand to materials management and manufacturing or sourcing and purchasing, to logistics and transportation, to inventory management and automated replenishment orders at Optimas and at our customers’ locations.” A key to success is the ability for all the supply chain tools the company uses to work together seamlessly, to help keep customers appropriately stocked and better manage costs, demand, inventory, production, and suppliers. The information provided through analytics needs to address financial issues such as cashflow and pricing on the supply and demand sides.


Cloud 2.0: Serverless architecture and the next wave of enterprise offerings

Serverless architecture brings two benefits. First, it enables a pay-as-you-go model on the full stack of technology and on the most granular basis possible, thereby reducing the overall run cost. The pay-as-you-go model is activated by putting functions into production via the operator of the serverless ecosystem only when they are needed. Therefore, serverless architecture not only reduces costs below the economies of scale provided by cloud-based setups capable of operating infrastructure at large scale, but also reduces idle capacity. Second, serverless architecture provides ecosystem access for the underlying infrastructure as well as the entire functionality, thereby drastically reducing the cost to transform the company’s IT environment. Ecosystem access for functions is achieved through the provider’s FaaS and BaaS models instead of being redeveloped for every client. While ecosystem access in SaaS was only possible for the entire software package, with serverless architecture even small-scale functions can be reused, thereby offering more flexibility and reusability on a broad basis.


Meta wants to turn real life into a free-to-play

Companies adopting the free-to-play monetization techniques in their titles naturally have an incentive to max out the users’ shopping sprees. To this end, they can deploy a whole array of design decisions, from annoying pop-ups with links to in-game shops to more sophisticated tools. The latter use behavioral data and psychological tricks to goad the users into spending more. Some of the latest patents coming from leading industry names, such as Activision, put machine learning at the service of the company’s bottom line. Tweaking the matchmaking system to prompt new players to spend more? Check. Clustering players in groups to target them with tailored messaging, offerings, and prices? Check. These and other techniques live and breathe behavioral data. As such, they do raise red flags in terms of data exploitation, especially if you consider who tends to fall for them the hardest. Free-to-play games make a solid chunk of their revenues off a very small subset of their player base, the so-called “whales,” as high-paying players are known in the industry.


Managing Complex Dependencies with Distributed Architecture at eBay

The eBay engineering team recently outlined how they came up with a scalable release system. The release solution leverages distributed architecture to release more than 3,000 dependent libraries in about two hours. The team is using Jenkins to perform the release in combination with Groovy scripts. As we learnt from Randy Shoup (VP of engineering and chief architect at eBay) and Mark Weinberg (VP, core product engineering at eBay), eBay had systemic challenges with releasing major dependencies, leading to the equivalent of distributed monoliths. Late last year, eBay began migrating their legacy libraries to a Mavenized source code structure. The engineering team needed to consider the complicated dependency relationships between the libraries before the release. The prerequisite for releasing one library is that all of its dependencies must already have been released; given the large number of candidate libraries and the complicated dependency relationships between them, release performance suffers considerably if the library release sequence cannot be orchestrated well.
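
The ordering constraint described here (a library can only be released after all of its dependencies) is exactly a topological sort of the dependency graph, and libraries whose dependencies are all released can go out in parallel. The sketch below uses Python's standard-library graphlib (3.9+) with a made-up dependency graph; it is not eBay's implementation, just an illustration of the scheduling idea.

```python
# Sketch of release ordering as a topological sort of a made-up dependency
# graph; each ready batch could be released in parallel.
from graphlib import TopologicalSorter

# library -> set of libraries it depends on
deps = {
    "lib-app":    {"lib-core", "lib-net"},
    "lib-net":    {"lib-core"},
    "lib-core":   set(),
    "lib-report": {"lib-core"},
}

ts = TopologicalSorter(deps)
ts.prepare()
while ts.is_active():
    batch = list(ts.get_ready())   # libraries whose dependencies are all released
    print("release in parallel:", batch)
    ts.done(*batch)
```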


Mark Zuckerberg’s vision for the metaverse is off to an abysmal start

While Meta’s promotional vision for metaverse worlds is a series of distinct snapshots, other metaverse platforms, such as Decentraland, The Sandbox, and Cryptovoxels, feature some level of urban planning. Like in many real-world cities, they use a grid system with plots of land distributed on a horizontal plane. This allows for property to be easily parceled and sold. However, many of these plots have remained empty, demonstrating that they are primarily traded speculatively. In some instances, content—buildings and things to do, see, and buy within them—has been added to plots of land, in an effort to create value. Virtual property developer the Metaverse Group is leasing Decentraland parcels and offering in-house architectural services to tenants. Its parent company, Tokens.com, has virtual headquarters there too, a blocky sci-fi-style tower in an area called Crypto Valley. ... Real cities are now choosing to emulate themselves in the metaverse. South Korea’s Metaverse 120 Centre will provide both recreational and administrative public services. 


SARB notes benefits, risks in using distributed ledger technology

One of the primary risks stems from the lack of regulatory certainty as the existing legal and regulatory frameworks for financial markets were not designed for trading, clearing or settling on DLT, he added. Innovation should be done in a way that the financial system is taken forward to benefit society as a whole, including contributing to achieving objectives such as improving efficiency, lowering barriers to entry for financial activity and addressing any challenges restricting access to meaningful financial services. ... “PK2 has demonstrated that building a platform for a tokenised security would impact on the existing participants in the financial market ecosystem, as several functions currently being performed by separately licensed market infrastructures could be carried out on a single shared platform. ... Further, the report, produced in partnership with the Intergovernmental Fintech Working Group and financial industry participants, highlights several legal, regulatory and policy implications that need to be carefully considered in the application of DLT to financial markets.


Why There is No Digital Future Without Blockchain

In web3, new storage solutions allow people to store data for each other in a secure and decentralized way. This makes it much, much, more difficult to obtain user data through hacking a server full of data. At the same time, the way data will be managed on the user-side is that it will be completely permission-based. Users will be able to manage data access on the fly, giving and withdrawing permission to personal data when needed. In our vision, this will end up being the way the internet is going to work in the future, whether you apply for a loan or do an online personality test. ... The power of blockchain here lies in the power of digital sovereignty, in other words, the freedom to do whatever you want online without anybody telling you otherwise. Here again, the decentralized nature of blockchain is key, because it makes it virtually impossible for any third party to interfere with the process. ... The idea is that the decentralized nature of blockchain allows people to transact wealth freely, without the need for banks, governments, or anybody else. This once sounded like a futuristic libertarian utopia, now it’s becoming a reality.


How to Measure Agile Maturity

Delivering successful products is essential and goes hand in hand with knowing how good we are at creating the product: our performance. I suggest resisting the urge to measure our performance as a cost. There are many useful metrics available, such as speed, quality, predictability, etc., that monitor our performance. A word of caution is needed when deciding which metrics are valuable and which are not. For example, velocity is not suitable for comparing team performance. Although it can be a valuable metric at a team level, intended for the team to monitor its own speed, velocity does not add up to give you a number for your organisational speed. Some suggestions for useful metrics: cycle time, release frequency, product index, innovation rate, etc. ... Measuring how well we perform in delivering value to the customer also serves as a metric for organisational change. How? If it takes multiple sprints and 16 hand-offs to ship an integrated product, we can monitor how we are doing in trying to deliver that integrated product without hand-offs in a single sprint. If the number of handoffs of a team goes down, their ability to deliver Done goes up, which is a metric of organisational improvement.
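
Cycle time, one of the suggested metrics, is straightforward to compute from work-item timestamps: the elapsed time from when work starts on an item to when it is done. The items and dates below are made up for illustration.

```python
# Small sketch of cycle time: elapsed days from work start to done (made-up data).
from datetime import date
from statistics import mean

work_items = [
    {"id": "A-101", "started": date(2022, 3, 1), "done": date(2022, 3, 4)},
    {"id": "A-102", "started": date(2022, 3, 2), "done": date(2022, 3, 9)},
    {"id": "A-103", "started": date(2022, 3, 7), "done": date(2022, 3, 8)},
]

cycle_times = [(item["done"] - item["started"]).days for item in work_items]
print(f"average cycle time: {mean(cycle_times):.1f} days")
```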



Quote for the day:

"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis

Daily Tech Digest - April 08, 2022

Why Literate Programming Might Help You Write Better Code

Literate programming is an approach to programming in which the code is explained using natural language alongside the source code. This is distinct from related practices such as documentation or code comments; there, the code is primary, with commentary and explanation being secondary. In literate programming, however, explanation has equal billing with the code itself. “Documentation is fundamentally disconnected from the code,” Franusic noted. Often, “documentation is written by someone who doesn’t work on the code. This distance between code and documentation makes it harder to really understand what the code is doing.” This underlines what makes literate programming particularly valuable: it’s a means of gaining greater transparency or clarity over code. Having been developed in the early ‘80s by Donald Knuth, a computer scientist now professor emeritus at Stanford University, it would be easy to dismiss literate programming as a relic of a much earlier era of computing.


FBI Cybersecurity Strike Against Russian Botnet Is ‘Awesome Moment’ For MSPs

The FBI operation marks the beginning of a new era in the continuing battle MSPs are waging to protect SMBs and themselves from all kinds of attacks, including nation-state attacks, said Stinner. “Big businesses have invested heavily in cybersecurity, and their defenses are high,” he said. “They are harder to attack. This was an attempt by Russia to inflict maximum chaos in the United States economy by taking down small businesses. This could potentially have impacted millions of small businesses. The Russian government was looking to take down Main Street, and they targeted WatchGuard devices. If Russia was successful, this could have caused mass pandemonium.” Michael Goldstein, president and CEO of Fort Lauderdale, Fla.-based MSP LAN Infotech, applauded the FBI for working closely with WatchGuard to take “action” to prevent what could have been a devastating attack. “It looks like the firewalls were there, [and they were] planting malware that were botnets that were going out and reporting back [to the hackers],” he said.


Is Crypto Re-Creating the 2008 Financial Crisis?

I’ve definitely heard that a selling point of DeFi is that it gets rid of the need for bailouts. And yes: I’ve had people accuse me on this point of shilling for big banks, and it’s just not true. If you’re asking me to choose, I’d absolutely rather see a bailout that prevents broader, sustained economic chaos than not. And the reason for that isn’t because I care about protecting executives at banks. In all my work, I’m speaking for the people downwind of all of this. The already vulnerable people who end up being hurt the most by financial collapse. ... Complexity is weaponized in some of these instances to deflect scrutiny. This is an old trick from the financial industry: Make things more complex. In DeFi, you have financial complexity overlaid with technical complexity, too—so there is, really, just the thinnest subset of people who can do both. And those people will be paid a LOT of money to participate and build these tools. And when the slice of people is so small and they’re so handsomely rewarded, there’s not going to be many savvy watchdogs—there’s less incentive to be a policeman on the beat. It’s much easier to just go work on a project.


How To Get Started With IoT Device Security

An organization’s first step is to know the locations of all its intelligent devices. That’s harder to do than it might seem. These devices are commonly installed by one user or department without coordination of the rest of the organization. The move to remote work has exacerbated the problem at the edge, with organizations lacking visibility into the devices used by remote employees. To locate intelligent devices, an organization must map the IoT security architecture. In doing so, the organization should have a clear view of how each device interacts with the application and technology stack. Additionally, the organization must understand who in the organization is responsible for updating and managing devices. Having a full list of the devices is also important. Traditionally, companies use network device monitoring or asset management and monitoring software. That’s a good start, but using IoT-specific tools can be more accurate. These include IoT asset management software and network sensors. IoT security platform vendors include Ordr, Tele2, BeWhere, and Particle.


Comparing Go vs. C in embedded applications

Compiled Go code is generally slower than C executables. Go is fully garbage collected and this itself slows things down. With C, you can decide precisely where you want to allocate memory for the variables and whether that is on the stack or on the heap. With Go, the compiler tries to make an educated decision on where to allocate the variables. You can see where the variables will be allocated (go build -gcflags -m), but you cannot force the compiler to use only the stack, for example. However, when it comes to speed we can not forget about compilation speed and developer speed. Go provides extremely fast compilation; for example, 15,000 lines of Go client code takes 1.4 seconds to compile. Go is very well designed for concurrent execution (goroutines and channels) and the aforementioned rich standard library covers most of the basic needs, so development is faster. ... There are two Go compilers you can use: the original one is called gc. It is part of the default installation and is written and maintained by Google. The second is called gccgo and is a frontend for GCC. With gccgo, compilation is extremely fast and large modules can be compiled within seconds. 


Transformers for software engineers

This post is an attempt to present the Transformer architecture in a way that highlights some of the perspectives and intuitions that view affords. We’ll walk through a (mostly) complete implementation of a GPT-style Transformer, but the goal will not be running code; instead, I’m going to use the language of software engineering and programming to explain how these models work and articulate some of the perspectives we bring to them when doing interpretability work. ... At the highest level, an autoregressive language model (including the decoder-only Transformer) will take in a sequence of text (which we’ll refer to as a “context”), and output a sequence of “logits” the same length as the context. These logits represent, at each position, the model’s prediction for the next token. At each position, there is one logit value per entry in our vocabulary; by taking a softmax over the logit vector, we can get a probability distribution over tokens.
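
The last step described above can be written in a few lines: a softmax over one position's logit vector yields a probability distribution over the vocabulary. The vocabulary and logit values below are made up for illustration.

```python
# Softmax over one position's next-token logits (made-up vocabulary and values).
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 0.1, -1.0, 1.2])

probs = np.exp(logits - logits.max())   # subtract max for numerical stability
probs /= probs.sum()

for token, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{token:>4}: {p:.3f}")
```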


FDA Document Details Cyber Expectations for Device Makers

"The structure of the guidance document has changed to align with a secure product development framework and associated ties to the quality system regulations," she says. The FDA also removed "risk tiers" that were contained in previous 2018 draft guidance. "The cybersecurity of the healthcare sector depends on the cybersecurity of all medical devices," according to Schwartz. "To ensure that all manufacturers are appropriately addressing cybersecurity risks, the FDA recommends that all manufacturers provide the requested cybersecurity information; however, the amount of cybersecurity documentation is expected to scale with the cybersecurity risk of the device." Also, the new draft guidance - unlike the draft issued in 2018 - does not refer to "cybersecurity bill of materials," but instead refers to "software bills of materials," she says. "The primary difference between a CBOM and an SBOM, as outlined, is that CBOM also includes hardware. SBOM includes firmware, which is a type of software." 


4 tips for transitioning into an IT management role

Micromanagement is about mistrust. The micromanager believes that they can do things better or faster than anyone else. What micromanagers usually fail to understand is that their behavior causes long-term problems. Team members of micromanagers often feel demoralized. They begin to question their purpose at work and whether their boss values their input. Some employees kick back and ride the wave, figuring their manager will make corrections regardless of what they do. Others look to escape. Meanwhile, the micromanager is stressed out because there aren’t enough hours in the day to do their job and everyone else’s. It usually takes an intervention to get these leaders back on track. Reformed micromanagers usually have experienced an epiphany. Perhaps they’ve received a 360-degree assessment that reveals their behavior, or perhaps someone they respect calls them out on their conduct. These leaders eventually realize that employee engagement depends entirely on the very trust they’re eroding.


Accommodating the influx of data in the metaverse

One of the foundational pillars to enable the metaverse is more efficient and less energy-hungry data compression. As XR technologies advance and become more mainstream, the metaverse needs to accommodate higher resolution displays and higher streaming quality, for both video feeds and volumetric objects, to allow its users to completely immerse themselves. By reducing the mammoth file sizes needed, businesses can conserve storage capacity and power, and minimise the need to expand their infrastructure to cope. They can also effectively manage the growing volumes of data from XR devices without compromising on viewer quality. The Low Complexity Enhancement Video Coding standard, MPEG-5 LCEVC, is an example of technology ideally suited to metaverse applications. It allows highly efficient compression of low-latency video feeds, making higher quality streaming in the new XR reality possible and mass adoption more feasible. LCEVC also offers various multi-layering features which are ideal for video streaming and rendering within a complex 3D space, swiftly displaying and updating image pixels without any apparent lag for the user.


Organizations underestimating the seriousness of insider threats

“Despite increased investment in cybersecurity, organizations are focused more on protecting themselves from external threats than paying attention to the risks that might be lurking within their own network,” says Chris Waynforth, AVP Northern Europe at Imperva. “Insider threats are hard to detect because internal users have legitimate access to critical systems, making them invisible to traditional security solutions like firewalls and intrusion detection systems. The lack of visibility into insider threats is creating a significant risk to the security of organizations’ data.” The main strategies currently being used by organizations in EMEA to protect against insider threats and unauthorized usage of credentials are periodic manual monitoring/auditing of employee activity (50%) and encryption (47%). Many are also training employees to ensure they comply with data protection/data loss prevention policies (65%). Despite these efforts, breaches and other data security incidents are still occurring, and 56% of respondents said that end users have devised ways to circumvent their data protection policies.



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor

Daily Tech Digest - April 07, 2022

Researchers Identify ‘Master Problem’ Underlying All Cryptography

In the absence of proofs, cryptographers simply hope that the functions that have survived attacks really are secure. Researchers don’t have a unified approach to studying the security of these functions because each function “comes from a different domain, from a different set of experts,” Ishai said. Cryptographers have long wondered whether there is a less ad hoc approach. “Does there exist some problem, just one master problem, that tells us whether cryptography is possible?” Pass asked. Now he and Yanyi Liu, a graduate student at Cornell, have shown that the answer is yes. The existence of true one-way functions, they proved, depends on one of the oldest and most central problems in another area of computer science called complexity theory, or computational complexity. This problem, known as Kolmogorov complexity, concerns how hard it is to tell the difference between random strings of numbers and strings that contain some information. ... The finding suggests that instead of looking far and wide for candidate one-way functions, cryptographers could just concentrate their efforts on understanding Kolmogorov complexity. “It all hinges on this problem,” Ishai said. 


4 Reasons Decentralized Business Management Is Booming

Organizations face employee churn all the time, whether due to a lack of challenging work or dissatisfaction with the company's overall direction. Both of these reasons are interconnected. An inflexible organizational hierarchy leaves employees fighting to impress their managers instead of creating revenue-generating assets. With power consolidated in the hands of a few, leadership skills are scarce. Thus, when top-level executives move on, the company faces a tough time replacing those who departed and must engage resources to locate and vet suitable leadership. Promoting from within is ideal because long-term employees understand the company and its products well. They've witnessed the company's processes from the ground up, which makes them ideal leaders. However, centralized organizations don't provide low-level employees with the opportunity to ascend to leadership roles. A decentralized organization forces employees to act as leaders. Thanks to greater autonomy and priority on responsiveness, employees must act decisively. Intrapreneurship increases, promoting creativity, and the organization is energized.


DeFi can breathe new life into traditional assets

Tokenization of commodities enables blockchain-based ownership of a physical asset, which is essentially just a decentralized version of an already-existing practice in traditional finance. Tokenized precious metals are somewhat similar conceptually to a share in a gold exchange-traded fund (ETF), as they represent the investor’s stake in physical gold stored elsewhere and largely work toward the same purpose. Projects like VNX offer digital ownership of tokenized commodities that are backed by physical assets including gold, giving the investor the same benefits as investing in physical gold, with the versatility of a crypto asset on top of that. Stablecoins are also a viable option, allowing investors to reap the benefits of decentralization while maintaining the security of traditional finance. Backing from fiat and other real-world assets removes the common fear that crypto has no basis. Stablecoins like TrustToken (TUSD) grant investors more certainty and flexibility, lowering the stakes for any user by enabling easy redemption of their funds at any given moment.


Chinese APT Targets Global Firms in Monthslong Attack

The campaign, which began in October 2019, targeted Japanese firms and their subsidiaries in 17 locations across the world, Symantec said in its report. The focus of the campaign was to exfiltrate data, particularly from automotive organizations, as part of an industrial cyberespionage effort. The APT group was then using a custom malware variant called Backdoor.Hartup as well as "living off the land" tools to target its victims. Once the victim's network was compromised, the hackers remained active for up to a year to exfiltrate data. Cicada then used a Dynamic Link Library side-loading technique to compromise the victims' domain controllers and file servers. "Various tools (were) deployed in this campaign, and Cicada’s past activity indicates that the most likely goal of this campaign is espionage. Cicada activity was linked by U.S. government officials to the Chinese government in 2018," the latest report says. Upon successfully gaining access to victim machines, the Symantec researchers observed APT actors deploying a custom loader and the SodaMaster backdoor. 


First malware targeting AWS Lambda serverless platform disclosed

The researchers have dubbed the malware “Denonia” — the name of the domain that the attackers communicated with — and say that it was utilized to enable cryptocurrency mining. But the arrival of malware targeting AWS Lambda suggests that more damaging cyberattacks against the service are inevitable as well. Cado Security said it has reported its findings to AWS. In a statement in response to an inquiry about the reported malware discovery, AWS said that “Lambda is secure by default, and AWS continues to operate as designed.” ... Cado Security cofounder and CTO Chris Doman said that businesses should expect that serverless environments will follow a similar threat trajectory to that of container environments, which he noted are now commonly impacted by malware attacks. Among other things, that means that threat detection in serverless environments will need to catch up, Doman said. “The new way of running code in serverless environments requires new security tools, because the existing ones simply don’t have that visibility. They won’t see what’s going on,” Doman said. “It’s just so different.”


Why We’re Porting Our Database Drivers to Async Rust

Similar to the way Python relies on modules compiled in C to make otherwise unbearably slow code faster, our CQL drivers could benefit from a Rust core. A lightweight API layer would ensure that the drivers are still backward compatible with their previous versions, but the new ones will delegate as much work as possible straight to the Rust driver, trusting that it’s going to perform the job faster and safer. Rust’s asynchronous model is a great fit for implementing high-performance, low-latency database drivers because it’s scalable and allows high concurrency in your applications. Contrary to what other languages implement, Rust abstracts away the layer responsible for running asynchronous tasks. This layer is called the runtime. Being able to select, or even implement, your own runtime is a powerful tool for developers. After careful research, we picked Tokio as our runtime due to its active open source community; its focus on performance; its rich feature set, including a complete implementation of network streams, timers, and more; and lots of fantastic utilities like tokio-console.


How David Chaum Went From Inventing Digital Cash to Pioneering Digital Privacy

Shocked by the surveillance operations exposed by Edward Snowden, Chaum refined the mixing technologies developed at the end of the 1970s to provide untraceable message sending, using sophisticated cryptography not only to encrypt the content of messages but to hide the identity of the user by eliminating the "metadata" of who sends messages to whom, how often and from where. Chaum is horrified by the promises of “end-to-end” message content encryption offered by companies such as Meta (formerly Facebook). It leaves user metadata intact, which means it can still be harvested and sold, he warns. “It's criminal. It's exploitative of the public in the worst way,” says Chaum. “Because the real value in the information is the traffic data,” and “the sender's social graph and its relation to the timing of events,” he says—it could be used to predict our behavior and to further political ends (as was the case in the Cambridge Analytica scandal).


Reproducibility in Deep Learning and Smooth Activations

The Smooth reLU (SmeLU) activation function is designed as a simple function that addresses the concerns with other smooth activations. It connects a 0 slope on the left with a slope-1 line on the right through a quadratic middle region, with continuous gradients enforced at the connection points (as an asymmetric version of a Huber loss function). SmeLU can be viewed as a convolution of ReLU with a box. It provides a cheap and simple smooth solution that is comparable in reproducibility-accuracy tradeoffs to more computationally expensive and complex smooth activations. The figure below illustrates the transition of the loss (objective) surface as we gradually transition from a non-smooth ReLU to a smoother SmeLU. A transition of width 0 is the basic ReLU function, for which the loss objective has many local minima. As the transition region widens (SmeLU), the loss surface becomes smoother. If the transition is too wide, i.e., too smooth, the benefit of using a deep network wanes and we approach the linear model solution — the objective surface flattens, potentially losing the ability of the network to express much information.
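
The sketch below implements the piecewise shape described above, assuming the commonly published parameterization with a half-width beta: zero to the left of -beta, the identity to the right of +beta, and a quadratic in between so that both the value and the gradient are continuous at the connection points. Treat the exact formula as an assumption for illustration rather than a quotation from the article; beta near 0 recovers ReLU, while larger values widen and smooth the transition region.

    package main

    import "fmt"

    // smelu is 0 on the left, the identity on the right, and a quadratic in
    // between, so both the value and the gradient are continuous at +/-beta.
    func smelu(x, beta float64) float64 {
        switch {
        case x <= -beta:
            return 0
        case x >= beta:
            return x
        default:
            return (x + beta) * (x + beta) / (4 * beta)
        }
    }

    func main() {
        for _, x := range []float64{-2, -0.5, 0, 0.5, 2} {
            fmt.Printf("smelu(%4.1f, beta=1) = %.3f\n", x, smelu(x, 1.0))
        }
    }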


The security implications of the hybrid working mega-trend

Ultimately, any high-level security model really breaks down into a trust issue: Who and what can I trust? – the employee, the devices, and the applications the employee is trying to connect to. In the middle is the network, but today, more often than not, the network is the internet. Think about it. Employees sit in coffee shops and log onto public browsers to access their email. So now what organisations are looking for is a secure solution for their applications, devices, and users. Every trusted or ‘would-be trusted’ end-user computing device has security software installed on it by the enterprise IT department. That software makes sure the device and the user who is on the device is validated, so the device becomes the proxy to talk to the applications on the corporate network. So now the challenge lies in securing the application itself. Today’s cloud infrastructure connects the user directly to the application, so there is no need to have the user connect via an enterprise server or network. The client is always treated as an outsider, even while sitting in a corporate office.


The Principles of Test Automation

The only way to reliably find errors is to build a comprehensive automated test suite. Tests can check the whole application from top to bottom. They catch errors before they can do any harm, find regressions, and run the application on various devices and environments at a scale that is otherwise prohibitively expensive to attempt manually. Even if everyone on the team were an exceptionally clever developer who somehow never made a mistake, third-party dependencies could still introduce errors and pose risks. Automated tests can scan every line of code in the project for errors and security issues. ... Some tests start their lives as manual tests and get automated down the road. But, more often than not, this results in overcomplicated, slow, and awkward tests. The best results come when tests and code have a certain synergy. The act of writing a test nudges developers to produce more modular code, which in turn makes tests simpler and more granular. Test simplicity is important because it’s not practical to write tests for tests. Code should also be straightforward to read and write. Otherwise, we risk introducing failures with the tests themselves, leading to false positives and flakiness.
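
As a hypothetical illustration of the kind of small, granular automated test described above (the function name and numbers are made up, not from the article), the sketch below pairs a tiny, side-effect-free function with its unit test. In a real project the function and the test would live in separate files (for example price.go and price_test.go) and the test would run with go test.

    package price

    import "testing"

    // ApplyDiscount works in integer cents and is deliberately small and
    // side-effect free, which keeps its test short and unambiguous.
    func ApplyDiscount(cents, percent int) int {
        return cents * (100 - percent) / 100
    }

    func TestApplyDiscount(t *testing.T) {
        got := ApplyDiscount(20000, 10) // a $200.00 order with a 10% discount
        want := 18000
        if got != want {
            t.Errorf("ApplyDiscount(20000, 10) = %d, want %d", got, want)
        }
    }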



Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward