Daily Tech Digest - September 03, 2020

What is an office for now?

Working from home does work for a lot of people; I’ve been working from home since way before it was cool. But it can be terrible — isolating and uncomfortable, with blurred boundaries that make it too easy to keep working well past “office hours” but equally too easy to drift away from your desk to load the dishwasher. One survey on working from home, conducted by the Institute for Employment Studies in the U.K. early in its lockdown, found that more than half of respondents reported new musculoskeletal complaints, including neck and back pain, while their diet and exercise suffered. Many of them said they slept less and worried more. ... Additionally, asking employees to turn their home into an office makes employers more responsible for what happens there, while simultaneously making it more difficult to assess worker well-being. “I’ve spent a lot of my time making sure that people are OK in a way that you can do very, very swiftly in the office,” Sam Bompas, director at Bompas & Parr, a London-based experience design studio with approximately 20 employees, told me. “In the same way that for children, school provides an important social security function, if there’s anything wrong in [employees’] personal life, the office can do that as well.”


Most IoT Hardware Dangerously Easy to Crack

One of the easiest methods is to gain access to UART, or Universal Asynchronous Receiver/Transmitter, a serial interface used for diagnostic reporting and debugging in all IoT products, among other things. An attacker can use the UART to gain root shell access to an IoT device and then download the firmware to learn its secrets and inspect for weaknesses. "UART is only supposed to be used by the manufacturer. When you get access to it, in most cases you get complete root access," Rogers said. Protecting access to UART, or at least disabling interactive access over it, should be a fairly straightforward task for manufacturers; however, most don't make the effort. "They simply allow you to have complete interactive shell. It is the easiest way to hack every piece of IoT hardware," Rogers noted. Several devices even have UART pin names labeled on the board, so it is easy to find the interface. Multiple tools are available to help find them if they are not labeled. Another, only slightly more challenging, route to completely pwning an IoT device is via JTAG, a microcontroller-level interface that is used for multiple purposes including testing integrated circuits and programming flash memory.
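The triage step of this attack path is easy to automate. Below is a minimal, hypothetical sketch (the function name and prompt patterns are illustrative, not from the article) of how a tester might classify output captured from a candidate UART header, deciding whether the console drops straight into a shell or at least gates access behind a login:

```python
import re

# Common prompts an exposed UART console tends to emit. A bare shell
# prompt ("#" or "$") with no login prompt is the telltale sign that the
# interface hands interactive access to anyone who connects.
LOGIN_PATTERNS = [rb"login:", rb"[Pp]assword:"]
SHELL_PATTERNS = [
    rb"\n[#$]\s*$",                       # prompt at end of captured output
    rb"(^|\n)root@[\w.-]+[:#]",           # root shell prompt
    rb"BusyBox v[\d.]+.*built-in shell",  # BusyBox banner before its shell
]

def classify_uart_banner(banner: bytes) -> str:
    """Classify captured UART output as 'open-shell', 'login-gated', or 'unknown'."""
    if any(re.search(p, banner) for p in SHELL_PATTERNS):
        return "open-shell"
    if any(re.search(p, banner) for p in LOGIN_PATTERNS):
        return "login-gated"
    return "unknown"
```

On real hardware the banner would be captured over a USB-to-serial adapter, e.g. with the third-party pyserial package (`serial.Serial("/dev/ttyUSB0", 115200)`), trying the handful of common baud rates until readable text appears.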


Principles for Microservice Design: Think IDEALS, Rather than SOLID

The goal of interface segregation for microservices is that each type of frontend sees the service contract that best suits its needs. For example: a mobile native app wants to call endpoints that respond with a short JSON representation of the data; the same system has a web application that uses the full JSON representation; there’s also an old desktop application that calls the same service and requires a full representation but in XML. Different clients may also use different protocols. For example, external clients want to use HTTP to call a gRPC service. Instead of trying to impose the same service contract (using canonical models) on all types of service clients, we "segregate the interface" so that each type of client sees the service interface that it needs. How do we do that? A prominent option is to use an API gateway. It can do message format transformation, message structure transformation, protocol bridging, message routing, and much more. A popular variation is the Backend for Frontends (BFF) pattern. In this case, we have an API gateway for each type of client -- we commonly say we have a different BFF for each client.
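As a rough illustration of the BFF idea, here is a minimal Python sketch with invented field names and two hypothetical per-client shaping functions: one BFF trims the full representation down to the short JSON the mobile app wants, while another renders the full representation as XML for the legacy desktop client.

```python
import json
import xml.etree.ElementTree as ET

# Full representation as the core service might return it
# (field names are illustrative, not from the article).
FULL = {
    "id": "42", "name": "Espresso Machine", "price": "199.00",
    "description": "15-bar pump, stainless steel body", "stock": "12",
    "warehouse_codes": ["EU-1", "US-3"],
}

def mobile_bff(full):
    """Mobile BFF: return only the short JSON representation."""
    short = {k: full[k] for k in ("id", "name", "price")}
    return json.dumps(short)

def desktop_bff(full):
    """Legacy desktop BFF: full representation, rendered as XML."""
    root = ET.Element("product")
    for key, value in full.items():
        if isinstance(value, list):
            parent = ET.SubElement(root, key)
            for item in value:
                ET.SubElement(parent, "item").text = item
        else:
            ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")
```

Each function plays the role of one gateway: the same backend data, segregated into the contract each client type actually needs.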


Ethical and professional data science needed to avoid further algorithm controversies

Identifying weaknesses in the attempts to ensure objectivity, the BCS report also said there is a need for clarity around what information systems are intended to achieve at the individual level, and that this should be established “right at the start” of the process. For example, distributing grades based on the characteristics of different cohorts of students so they are statistically in line with previous years – which is what the Ofqual algorithm did – is different to ensuring each individual student is treated as fairly as possible, something which should have been discussed and understood by all stakeholders from the beginning, it said. In terms of accountability, BCS said: “It is essential to develop effective mechanisms for the joint governance of the design and development of information systems right at the start.” Although it refrained from apportioning blame, it added: “The current exam-grading situation should not be attributed to any single government department or office.” However, Stian Westlake, CEO of the RSS, told Sky News the results fiasco was “a predictable surprise” because of the Department for Education’s (DfE) demand that Ofqual reduce grade inflation.


Why you shouldn’t mistake AI for automation

AI and automation should not be mistaken for the same thing—where there’s automation, there is no requirement that artificial intelligence is involved. Indeed, automation has been around for centuries, far longer than we’ve had computers: traditional milling used water wheels to automate manual processes that human labor would otherwise have been required for. Water spins the wheel, which turns the millstone—an automated process that’s decidedly unintelligent. Simple automation has been the cornerstone of many businesses for years. For example, a process of sending out invoices may be automated once inputs into spreadsheets have been confirmed by people in the accounts department. Automation means that machines are replicating human tasks. But AI demands that the machines are also replicating human thinking. This means programming that can reflect on its own procedures and make decisions beyond the scope of its explicit instructions. Ultimately, machine learning requires a machine to react dynamically to changing variables. This is a fundamentally different objective to automation, which is essentially about teaching machines to perform repetitive tasks with predictable inputs. For this reason, applying machine learning to any automated process may be a case of overengineering.


Convert PDFs to Audiobooks with Machine Learning

When you look at a research paper, it’s probably easy for you to gloss over the irrelevant bits just by noting the layout: titles are large and bolded; captions are small; body text is medium-sized and centered on the page. Using spatial information about the layout of the text on the page, we can train a machine learning model to do that, too. We show the model a bunch of examples of body text, header text, and so on, and hopefully it learns to recognize them. This is the approach that Kaz, the original author of this project, took when trying to turn textbooks into audiobooks. Earlier in this post, I mentioned that the Google Cloud Vision API returns not just text on the page, but also its layout. ... The book Kaz was converting was, obviously, in Japanese. For each chunk of text, he created a set of features to describe it: how many characters were in the chunk of text? How large was it, and where was it located on the page? What was the aspect ratio of the box enclosing the text (a narrow box, for example, might just be a side bar)? Notice there’s also a column named “label” in the training spreadsheet. That’s because, in order to train a machine learning model, we need a labeled training dataset from which the model can “learn.”
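A hedged sketch of what such layout features might look like in Python. The field names and feature choices below are illustrative assumptions, not Kaz's actual pipeline; the input is one text chunk with its bounding box, as an OCR or vision API might return it.

```python
def layout_features(chunk):
    """Derive simple layout features for one text chunk.

    `chunk` is assumed to be {"text": ..., "x": ..., "y": ..., "w": ..., "h": ...}
    with bounding-box coordinates normalised to the page (0..1).
    """
    w, h = chunk["w"], chunk["h"]
    return {
        "num_chars": len(chunk["text"]),
        "area": w * h,                       # large blocks are usually body text
        "aspect_ratio": w / h if h else 0,   # very narrow boxes may be sidebars
        "y_center": chunk["y"] + h / 2,      # headers cluster near the top
        "height_per_line": h / max(chunk["text"].count("\n") + 1, 1),  # crude font-size proxy
    }
```

Rows of such features, plus a hand-assigned "label" column (body, header, caption, ...), are exactly the kind of tabular dataset a standard classifier can be trained on.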


Zero-trust framework ripe for modern security challenges

Adopting a zero-trust security model is not an overnight process. "Younger companies with advanced architectures and less legacy equipment have an advantage since they are already utilizing new technology and are up to speed on new technology," said Pete Lindstrom, vice president of security research with IDC's IT Executive Program. Legacy infrastructure is an obstacle companies face when trying to shift to a zero-trust approach. A common yet misguided course of action is to conduct a massive overhaul of security infrastructure. "Companies often make the mistake of trying to boil the ocean and go way too broad in scope," Cunningham said. "They should focus in on granular things they can achieve one at a time, like enabling multifactor authentication, remote access control and disabling file shares." Since zero-trust security is a hot buzzword, businesses should be wary of how they evaluate potential vendors since many like to pitch their products as zero trust when they really aren't. "Rule No. 1: Companies should make sure the vendor is using zero trust [in its own network] so they are buying something from someone who understands their pains," Cunningham said.


.NET CLI Templates in Visual Studio

One of the values of using tools for development is the productivity they provide in helping start projects, bootstrapping dependencies, etc. One way that we’ve seen developers and companies deliver these bootstrapping efforts is via templates. Templates serve as a useful tool to start projects and add items to existing projects for .NET developers. Visual Studio has had templates for a long time and .NET Core’s command-line interface (CLI) has also had the ability to install templates and use them via `dotnet new` commands. However, if you were an author of a template and wanted to have it available in the CLI as well as Visual Studio you had to do extra work to enable the set of manifest files and installers to make them visible in both places. We’ve seen template authors gravitate toward making one experience work better, which sometimes leaves the other without visibility. We wanted to change that. Starting in Visual Studio 16.8 Preview 2 we’ve enabled a preview feature you can turn on so that all templates installed via the CLI also show up as options in Visual Studio.


How to predict new consumer behaviour in the Covid-19 era

Keeping tabs on what consumers are buying is the easiest way to get your data – predicting which products will grow and which won’t is where the gold is. While some product changes will be obvious — it’s unsurprising that purchase of medical supplies and non-perishable foodstuffs has increased — a 652% rise in the purchase of bread machines suggests that we don’t quite have the skills of Paul Hollywood just yet. There is also insight to be had in observing the products which have decreased in popularity over lockdown. Camera sales fell by 64% over the previous four months. As social events such as holidays, birthdays and weddings were cancelled, so was the need to bag a new ‘social accessory’ for the occasion. Think about how your product suite fits around these trends and whether these trends are short-term reactions or long-term shifts in behaviour. Can you scale back on a certain line of products or diversify your range to meet a new product demand? A shift to working — and playing — from home has driven significant demand for new purchases. With 43% of adults now working from home, companies that can help transform our homes into multipurpose activity hubs are rising in popularity.


How to make complicated machine learning developer problems easier to solve

Many of the difficulties in building efficient AI companies happen when facing long-tailed distributions of data….It's becoming clear that long-tailed distributions are also extremely common in machine learning, reflecting the state of the real world and typical data collection practices…. Current ML techniques are not well equipped to handle [long-tail distributions of data]. Supervised learning models tend to perform well on common inputs (i.e. the head of the distribution) but struggle where examples are sparse (the tail). Since the tail often makes up the majority of all inputs, ML developers end up in a loop--seemingly infinite, at times--collecting new data and retraining to account for edge cases. And ignoring the tail can be equally painful, resulting in missed customer opportunities, poor economics, and/or frustrated users. Unfortunately, the answer isn't to throw more computational horsepower or data at the problem. The very problem of disparate data across diverse customer inputs contributes to diseconomies of scale, whereby it may cost 10X more (in terms of data, infrastructure, and more) to generate a 2X improvement.
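To make the head/tail economics concrete, here is a small self-contained sketch assuming inputs follow a Zipf law (a common model for long-tailed data, though not one the article commits to): even a model that handles the 10 most common classes out of 10,000 perfectly still leaves the majority of real-world inputs in the tail.

```python
def zipf_masses(n_classes, head, s=1.0):
    """Share of probability mass in the `head` most frequent classes vs the
    rest, assuming class frequency follows a Zipf law p(rank) ~ 1/rank**s."""
    weights = [1.0 / r**s for r in range(1, n_classes + 1)]
    total = sum(weights)
    head_mass = sum(weights[:head]) / total
    return head_mass, 1.0 - head_mass

# With 10,000 classes, the 10 most common cover under a third of all inputs;
# the 9,990 rare classes (the "tail") carry the rest.
head_mass, tail_mass = zipf_masses(n_classes=10_000, head=10)
```

This is why "just collect more data" scales so poorly here: each extra slice of accuracy comes from classes that individually appear only a handful of times.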



Quote for the day:

“Our greatest glory is not in never failing, but in rising up every time we fail.” -- Ralph Waldo Emerson 

Daily Tech Digest - September 02, 2020

Building a viable IT budget for 2021 in a time of uncertainty: Seven critical steps

In 2021, IT budget spends will be diversified over a broader range of categories (digitalization, mobile computing, employee training, for example) than in 2020, when IT budgets were heavily invested in security and cloud services. Security and cloud services will still lead investment categories, but organizations have reached an inflection point and feel they have attained many of their initial goals in these areas. End users will continue to be engaged in technology decision making. However, there are indications that more organizations want to fully understand just how much they spend on IT across the company. From a budgetary standpoint, this has sparked a movement to consolidate more of the IT spend (and assets) under a single umbrella, with IT in charge. Also in 2021, CFOs and other technology budget decision-makers will expect more input from successful trials and proofs of concept before they agree to fund new technology. This is in response to the mixed performance of ROI formulas, and also to cost overruns, which have routinely occurred with cloud services. That's not all. Below are seven additional budget forecasts that IT budget planners should take into account before building a 2021 IT budget.


Improvements in native code interop in .NET 5.0

With .NET 5 scheduled to be released later this year, we thought it would be a good time to discuss some of the interop updates that went into the release and point out some items we are considering for the future. As we start thinking about what comes next, we are looking for developers and consumers of any interop solutions to discuss their experiences. We are looking for feedback about interop scenarios in general – not just those related to .NET. If you have worked in the interop space, we’d love to hear from you on our GitHub issue. Some items mentioned in this post are Windows-specific (COM and WinRT). In those cases, ‘the runtime’ refers only to CoreCLR. ... C# function pointers are coming in C# 9.0, enabling the declaration of function pointers to both managed and unmanaged functions. The runtime needed some work to support and complement the interop-related parts of the feature. ... C# function pointers provide a performant way to call native functions from C#. It makes sense for the runtime to provide a symmetrical solution for calling managed functions from native code. UnmanagedCallersOnlyAttribute indicates that a function will be called only from native code, allowing the runtime to reduce the cost of calling the managed function.


Ducati Motors to leverage IT transformation from Aruba and Lenovo

“Using the latest and most advanced technologies is part of Ducati’s DNA,” said Konstantin Kostenarov, chief technology officer at Ducati. “Relying on the best technologies made available through our partners has significantly contributed to the overall improvement of processes, while at the same time increasing the value of the results achieved. “The choices made two years ago and the projects that have been carried out since then have allowed us to tackle the various complexities of this sport in the most effective way possible.” Giorgio Girelli, general manager of Aruba Enterprise, commented: “Among the technologies that have emerged as a result of Covid-19, the cloud is undoubtedly one that has proven its worth and made it possible to better face crisis situations. “An internally commissioned survey reveals that 59% of those who were able to use cloud solutions during emergency situations considered its use to be fundamental to their operations. “The sharing and combination of the latest technologies between the three companies involved has given life to a very innovative project focused on one goal: obtaining maximum performance.”


Leveraging AI to Deliver a Personalized Experience in the New Normal

It is key to understand how different subscribers perceive different experiences while gaming, attending a smart venue or traveling virtually. Each of these experiences will vary for different individuals: e.g. a man in his 30s who works from home versus a teenager who moves around the city. These experiences need to be predicted across various touch points, such as OTT game apps or smart venues, the network, call center, retail, and billing. It is also crucial to proactively identify anomalies and factors contributing to a negative experience or positive experience in order to act fast to resolve issues before they impact gaming customers, or to target the right customers at the optimal time for an add-on purchase in a smart venue. The application of AI and ML brings intelligent insights that are more precise than those produced by existing processes and systems, and enables the CSP to predict changes or anomalies in their customers’ experiences. AI and ML make it possible to look at each subscriber based on their individual profile, including demographics, device used or mobility, to predict the experience more accurately while taking into account individual sensitivities, biases and expectations. The insights software learns as dynamics change, whether in the CSP’s network, a customer segment or the wider market, and adapts its predictions accordingly.
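As a toy illustration of the anomaly-detection piece, here is a deliberately simple z-score detector over a per-subscriber experience metric (latency, in this invented example). Real CSP systems would use far richer models; this only shows the basic idea of flagging readings that deviate sharply from a subscriber's own baseline.

```python
import statistics

def flag_anomalies(history, threshold=3.0):
    """Return indices of readings that deviate from the mean by more than
    `threshold` standard deviations. A minimal stand-in for the
    proprietary anomaly models the article alludes to."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return []  # perfectly flat history: nothing to flag
    return [i for i, v in enumerate(history) if abs(v - mean) / stdev > threshold]
```

In practice a per-subscriber baseline like this is what lets the operator act on a degraded experience before the customer calls in.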


To build responsibly, tech needs to do more than just hire chief ethics officers

Just like the early days of digital, ethics can seem complex and remote. Remember thinking, “The internet will never be big enough to disrupt my industry”? It can be tempting to assume you need a Ph.D. to debate complex topics like algorithmic bias or exclusion, especially as many of those chief ethics officers have those deep credentials and expertise. Even though tech fancies itself as an industry that welcomes new types of talent and thinking, credentialism is more part of the industry culture than we think – or admit. (If you’re questioning that, just think about how popular it is to put ex-employers in your Twitter biography.) Unless you work on ethics full time or you’re a product VP, it’s easy to feel that you have no say or no role in your company’s commitment to social responsibility, especially if you’re underrepresented at your company or speaking up puts you at risk. Ethical leaders play a powerful central role in coordinating, setting standards and creating incentives, but they wouldn’t want to be the only ones to own this work, either. Responsibility’s a muscle we build and practice. Doing the right thing isn’t a one-off action, but a commitment to values that inform day-to-day behaviors and decisions. So we need to create structures that ensure company values are embedded in roles across the board.


What Is Resilience Engineering?

Resilience engineering today isn’t thought of as a function. However, just as DevOps was a description of culture before it was a role and site reliability was an extension of operations before it was a focus, I wouldn’t be surprised if resilience engineering became a function in the near future. The first question most will ask, however, is, “Isn’t this just SRE?” The purpose of the term is to change the focus from simply reacting to incidents to developing long-term response strategies for them. Because the expectation in these environments is that things will break, resilience is the responsibility of existing DevOps and cloud operations teams. When applications and services do break, a “fly by the seat of your pants” response strategy will not work. Resilience engineering, while rooted in engineering practices, is largely focused on building strategies and a framework for their execution. This leaves the process of building resilience into a system largely unestablished, in part because each system is unique. And, how you respond to issues in that system will likely be unique, even if the management plane that reports issues is not. ... For most, the best part of resilience engineering is taking what is learned from previous incidents and finding ways to automate future resolution.


Sustainability Through a Better 5G

Ericsson talks about ‘breaking the energy curve’ by providing products and solutions that simply use less energy and are the practical choice for companies striving to make a sustainable shift in their digital transformation journey. Swapping old radio equipment with 5G-ready Ericsson Radio System equipment nationwide enables service providers to serve 5G use cases with a single software upgrade and can also save them up to 30 percent on their energy consumption. For some operators these savings equate to paying back the investment made on modernization within just three years – who says sustainability does not go together with business goals? Looking to the future of work and travel post-coronavirus, it’s clear that our global mindset has shifted and that we can’t just go back to the way things were before. It’s all about connectivity, especially during these challenging times where keeping in touch with loved ones, essential services and businesses is more important than ever. The next era will witness technology not only serving our needs to stay connected but also enabling a more inclusive and sustainable world. With a focus on real-time data built upon a framework of sustainability, Ericsson have successfully architected a 5G-aware traffic management solution with AI embedded in its RAN Compute software.


Working from home: The 12 new rules for getting it right

Remote working doesn't change some elements of corporate professionalism. "Don't expect that colleagues, clients, and managers should always be easygoing in terms of dress code, tone of voice and punctuality in the remote workplace," Herman Tse, professor in the department of management at Monash Business School, tells ZDNet. And although there is a screen now separating you from your colleagues, don't take this as an opportunity to discreetly check emails or scroll Twitter during a video call, because others can tell when you are multi-tasking, even if virtually. You wouldn't check your phone in front of a co-worker giving an in-person presentation – so there is no reason to act differently online. With 30-minute slots being the default option when setting up a calendar meeting, calls that could take a couple of minutes now last for much longer than necessary. "There is work that needs to be done around calendar norms," Sowmyanarayan adds. "Things that take two minutes should take two minutes." Before setting up a day full of half-hour meetings, therefore, remember how long those chats would have taken in an office. More often than not, you will find that a shorter call is far more appropriate.


App Trimming in .NET 5

Trimming sounds great, but as with most good things, there is a catch. Trimming does a static analysis of the code and therefore can only identify types and members when they are referenced from code. However .NET offers a great deal of dynamism, typically depending on reflection. For example, Dependency Injection in ASP.NET Core uses reflection to select appropriate constructors. This is largely transparent to the static analysis, so it needs to either be told about the required types or be able to detect common dynamism patterns – otherwise it will trim away code that is needed by the application, which will result in runtime crashes. ... .NET 5 can take it two levels further and remove types and members that are not used. This can have a big effect where only a small subset of an assembly is used – for example, the console application above. Member-level trimming has more risk than assembly-level trimming, and so is being released as an experimental feature that is not yet ready for mainstream adoption. With assembly-level trimming, it’s more obvious when a required assembly is missing; with member-level trimming, you need exhaustive testing of the app to ensure that nothing required has been trimmed.


Q&A: CTO tips on delivering cloud innovation to avoid disruption

Make sure to develop and leverage an internal requirements matrix of what you are looking for. Be very clear about what you want and need from a particular cloud solution. Stack rank key priorities and progressively implement towards the long-term vision. Ask any vendor: How are things audited? Do they comply with privacy regulations such as GDPR? What technical support do they offer? Get a full picture of what the commitment is by the vendor. Deployments that are measured in quarters are too slow; companies need to think about how they can take advantage of the speed and control of cloud deployments and use an agile approach to incrementally transform. An important element to consider is the vendor’s application user rate and the holistic usability of any cloud applications. Two of the most important considerations are usability and adaptability. Will this be easily adaptable to fit your company’s needs? Look at their roadmap and past innovations to get a better sense of their ability to push on innovation and support the ever-changing needs of various businesses. This will give you a better sense of their ability to adapt to the changing needs of your company. Start a dialogue with vendors about how you need to demonstrate results quickly.



Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer

Daily Tech Digest - September 01, 2020

UK government unveils next steps in digital identity plans

The Digital Identity Strategy Board’s six principles:

Privacy – When personal data is accessed, people will have confidence that there are measures in place to ensure their confidentiality and privacy; for instance, a supermarket checking a shopper’s age, a lawyer overseeing the sale of a house, or someone applying to take out a loan.

Transparency – When an individual’s identity data is accessed when using digital identity products, they must be able to understand by who, why and when; for example, being able to see how your bank uses your data through digital identity solutions.

Inclusivity – People who want or need a digital identity should be able to obtain one.

Interoperability – Setting technical and operating standards for use across the UK’s economy to enable international and domestic interoperability.

Proportionality – User needs and other considerations, such as privacy and security, will be balanced so that digital identity can be used with confidence across the economy.

Good governance – Digital identity standards will be linked to government policy and law. Any future regulation will be clear, coherent and align with the government’s wider strategic approach to digital regulation.


Iranian Hackers Using LinkedIn, WhatsApp to Target Victims

By personalizing the campaign and using these social media platforms, the attackers attempt to gain the victims' trust and coax them into opening the malicious links embedded in follow-up emails, according to the report. Charming Kitten, also known as APT35, Phosphorus and Ajax, is one of Iran's top state-sponsored hacking groups. While the group's tactic of impersonating journalists is not new, ClearSky researchers say the latest campaigns are the first time the threat actors used mediums other than email or SMS to target their victims. "This is the first time we identified an attack by Charming Kitten conducted through WhatsApp and LinkedIn, including attempts to conduct phone calls between the victim and the Iranian hackers," the researchers note in the report. "These two platforms enable the attacker to reach the victim easily, spending minimum time in creating the fictitious social media profile. However, in this campaign, Charming Kitten has used a reliable, well-developed LinkedIn account to support their email spear-phishing attacks." ... Charming Kitten has been targeting journalists and activists since at least 2013.


Dealing with sovereign data in the cloud

Data sovereignty is more of a legal issue than a technical one. The idea is that data is subject to the laws of the nation where it’s collected and exists. Laws vary from country to country, but the most common governance you’ll see is not allowing some types of data to leave the country at any time. Other regulations enforce encryption and how the data is handled and by whom. These were pretty easy rules to follow when we had dedicated data centers in each country, but the use of public clouds that have regions and points-of-presence all over the world complicates things. Misconfigurations, lack of understanding, and just general screw-ups lead to fines, impacts to reputations, and, in some cases, disallowing the use of cloud computing altogether. Some best practices are emerging to deal with data sovereignty in the cloud. Data governance systems are worth their weight in gold. When dealing with regulations that are bound to data, these systems will keep you out of trouble since they won’t allow humans to violate data policies that are set to reflect the law of the land where the data resides. Training is another critical point. Most of the data sovereignty issues can be traced to human error. Everyone handling the data should be knowledgeable on the regulations. Many countries mandate this.
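The kind of automated guardrail such a data governance system provides can be sketched in a few lines of Python. The policy table, data classes and region names below are invented for illustration; a real system would source these rules from legal review, not a hard-coded dict.

```python
# Hypothetical policy table: which regions each data class may reside in,
# and whether it must be encrypted at rest.
POLICY = {
    "de-health-records": {"allowed_regions": {"eu-central-1"}, "encrypt": True},
    "uk-telemetry": {"allowed_regions": {"eu-west-1", "eu-west-2"}, "encrypt": False},
}

def check_placement(data_class, target_region, encrypted):
    """Return a list of policy violations for a proposed placement (empty = OK)."""
    rules = POLICY.get(data_class)
    if rules is None:
        return [f"no policy registered for {data_class!r}"]
    violations = []
    if target_region not in rules["allowed_regions"]:
        violations.append(f"{data_class} may not leave {sorted(rules['allowed_regions'])}")
    if rules["encrypt"] and not encrypted:
        violations.append(f"{data_class} must be encrypted at rest")
    return violations
```

Wired into a deployment pipeline, a check like this refuses the placement before a human error becomes a regulatory fine.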


How IoT is helping cities become more sustainable than ever before

Sensor-enabled devices have been helping to monitor the environmental impact of cities for some time, collecting details about sewers, air quality, and garbage. Recently, air pollution has been a big pain point in cities, such as London, Paris and Rome, where it is regularly cited as one of the most serious environmental problems affecting health today. To address this, many are turning to Air Quality Eggs (AQEs), which are open-source IoT platforms for air pollution. In simple terms, this is an open system that collates citizen-contributed data on air quality. ... Connected technologies are also helping to increase awareness and visibility into individual energy and resource usage. Smart energy meters provide city dwellers with transparent data on their own energy consumption, which has been shown to reduce consumption across the board. Today, connected smart thermostats can also be used to integrate with heating systems so that clear cut decisions can be made on when to turn the heating on based on fluctuating energy costs. Moreover, smart IoT water management sensors can be combined with data analytics programmes to provide consumers with increased visibility into the amount of water they use.


Overcoming the challenges of machine learning at scale

As with any emerging technology, another challenge is ensuring a positive return on investment with respect to business objectives. Success requires adjustments to both process and culture. “Organizations that are serious about scaling machine learning and bringing more models from the lab to production are investing in the processes, tools, and skills to support model management and operations,” said Isaac Sacolick, President of StarCIO and author of Driving Digital. “Organizations should start with high-value and easy-to-execute experiments, but then must recognize that scaling requires an investment in an end-to-end machine learning lifecycle.” Tim Crawford, CIO Strategic Advisor with AVOA, also emphasized the importance of process and culture. “First step, create a methodology and culture that supports ML and prioritizes how to engage ML,” he said. “Identifying the right projects, prioritizing, ensuring that you have enough good data and creating a culture that embraces ML across the enterprise.” A lack of alignment between ML projects and the business can hobble efforts to scale the technology, said Will Kelly, a technical writer.


Remote Work Has Law Firm Cybersecurity in a Fragile State

For even the most vigilant staff, homes are never going to be quite like offices. It’s too easy for someone to overhear sensitive information, and too much to expect that no one will ever use a personal email, chat tool or social media account to offer something that resembles legal advice. There are so many variables that can no longer be controlled. One firm has gone so far as to insist its lawyers switch off any smart device when on calls to certain clients lest an app listen in. Other firms have decided that certain apps should be banned altogether. Ropes & Gray banned its lawyers and staff from having social media app TikTok on devices that also receive work emails following privacy concerns from clients. And these are just the threats that have been discovered. Research by cybersecurity firm Tessian found that data loss incidents happen way more often than IT directors think. No wonder such people are constantly telling workers to take this stuff more seriously. Unfortunately, it is probably fair to say that there is only one thing that will really make people pay proper attention to their home working habits. And that is a major data breach hitting the headlines.


Is Covid-19 a Mental Health Tipping Point?

As more people remain at home in fear of COVID-19, it’s clear that the future of care is becoming increasingly digital. Even private insurers are stepping up, with most expanding their telehealth coverage, sometimes with no co-pay. This has been a windfall for digital behavioral health startups. Venture funding for this technology has reached unprecedented levels, with a record $588M raised during the first half of 2020, spurred by the pandemic. It’s clear that things will never be the same…and, in some ways, that’s a good thing. This shift has forced many companies to have difficult discussions about staff mental health and wellbeing that had previously been avoided. This new openness is helping employees feel more comfortable in acknowledging how they’re feeling – making it okay not to feel “okay.” This makes the role of managers more complicated and more impactful than ever before. Yet some managers may feel reluctant to share their own feelings and/or be unable to manage what can easily become an emotionally charged discussion. And, at the same time, they may be suffering too. It is essential that companies ensure managers have the training and support they need to, in turn, support their teams.


Underbanked households would benefit from a regulated blockchain

To be clear, distributed ledger technology is not a panacea, but its core attributes reinforce and strengthen essential controls required by regulators. First, the immutability of the ledger prevents participants within a network from changing or tampering with transactions once they have been recorded. Second, since the technology is decentralized, it provides greater transparency and decreases the risk of important information being concentrated within one group or organization. Third, the encrypted nature of blockchain strengthens data privacy and security while enabling secure data-sharing between counterparties, including with regulators and law enforcement when necessary. Many financial institutions remain reluctant to incorporate blockchain tools into their payments or compliance operations. Skepticism from industry, regulators and policymakers has further dampened interest. Yet essential financial products and services are increasingly being facilitated outside of the traditional banking system, often at a faster pace. Many of these new tools are accessible across borders, beyond a particular regulatory jurisdiction.
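The immutability property listed above hinges on hash chaining: each block commits to the hash of the block before it, so altering any recorded transaction invalidates every block that follows. A minimal, illustrative sketch (not the format of any real ledger):

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Chain records so each block commits to everything before it."""
    chain, prev = [], "0" * 64  # genesis hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([{"from": "A", "to": "B", "amt": 10},
                     {"from": "B", "to": "C", "amt": 5}])
assert verify(chain)
chain[0]["record"]["amt"] = 999   # tamper with a recorded transaction
assert not verify(chain)          # the whole chain now fails verification
```

Decentralization then means many parties hold copies of this chain, so a tampered copy disagrees with the rest of the network.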


Cisco: Making remote users feel at home on the new enterprise network

“The fundamental shift is that we need to think about our people working from home, and the home networks they use, as the default network. What we want is to create a high-quality micro-branch office in your home,” said Greg Dorai, vice president of product management and strategy for Cisco’s Enterprise Infrastructure and Solutions Group. “Now we must consider every work-from-home worker and every one of their home offices as worthy of the same level of connectivity support as our company headquarters and branches.” Realistically every company cannot provide every worker with headquarters-level support for their home networks, but there are technologies available and coming in the near future that can address the different needs of different workers, Dorai said. In Cisco’s case a couple of new offerings address wireless and wide area networking connectivity for remote users. “For employees for whom best-effort connectivity isn’t enough, we can replace or augment their home-networking access point with a Wi-Fi router that acts as an extension of the corporate network,” Dorai said. “Home wireless access points, configured by company IT before the employee installs them, can provide advanced security and monitoring and prioritize bandwidth for applications that need it.”


Interview with RavenDB Founder Oren Eini

RavenDB works with JSON documents, so using JavaScript is a very natural way to work with the database. There are a few ways that you can work with JavaScript in RavenDB. RavenDB has a built-in JS interpreter (supporting ECMAScript 5.1 and large parts of ES6) which can be used in queries and in patch operations. That gives you a lot of freedom to express what you want and apply logic on the database server. ... There are a few things that are on our roadmap that I am really looking forward to. For example, in RavenDB 5.1 we are going to ship replication support for Byzantine networks. This is useful when you have RavenDB nodes deployed in an environment where you don’t trust the remote nodes. A good example is when you need to integrate with a RavenDB instance that is running on a user’s machine, and you want to allow that user’s RavenDB instance access to some of the data in the cloud. That allows you to build systems that use RavenDB and collaborate, without needing to trust the remote locations. And conversely, the remote location doesn’t need to trust you. This will allow RavenDB to take on itself the role of synchronization between these locations.



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - August 31, 2020

How the DevOps model will build the new remote workforce

Most importantly, the humans managing systems ultimately determined the company's capacity to adapt to the pandemic. "We recognized that … the systems may need to scale, and we may need to make changes to meet a new global demand, [but] that is much different than how our peers, the people we care about and work with, are going to be impacted by this," Heckman said. Thus, the SRE team's role was not just to watch systems and shore up their reliability, but also to manage communications with other employees, Heckman said, "not only so they had the current context of what we were thinking, suggesting and where we were headed, but also to give them some confidence that the system around them would be fine." Similar principles must be applied to manage the human impact of a longer-term shift to remote work, said Jaime Woo, co-founder of Incident Labs, an SRE training and consulting firm, in a separate presentation at the SRE from Home event. "The answer is not 'just be stronger,'" for humans during such a transition, any more than it is for individual components in a distributed computing system under unusual traffic load, Woo said.


From Defense to Offense: Giving CISOs Their Due

CISOs are now in a position where they must — somehow — reinvent how they work and how they are perceived within their organizations. Historically, they have been the company's risk-averse first line of defense against cyberattacks, and have been viewed as such. But this state of affairs needs to evolve. "CISOs cannot afford to be seen as blockers of innovation; they must be problem-solvers," says Kris Lovejoy, EY Global Advisory Cybersecurity Leader, in EY's report. "The way we've organized cybersecurity is as a backward-looking function, when it is capable of being a forward-looking, value-added function. When cybersecurity speaks the language of business, it takes that critical first step of both hearing and being understood. It starts to demonstrate value because it can directly tie business drivers to what cybersecurity is doing to enable them, justifying its spend and effectiveness." But do current CISOs have the right skills and experience to work in this new way and serve in a more proactive and forward-thinking role? That's an open question, and the answer will probably demand a new breed of CISO whose job is not driven mainly by threat abatement and compliance.



Want an IT job? Look outside the tech industry

It's always been true that most software was written for use, not sale. Companies might buy their ERP software from SAP and their office productivity software from Microsoft, but they were writing all sorts of software to manage their supply chains, take care of employees, and more. What wasn't true then, but is definitely true now, is just how much of that software spend is now focused on company-defining initiatives, rather than back-office software meant to keep the lights on. Small wonder, then, that in the past year companies have posted nearly one million jobs in the US, according to data from Burning Glass, which scours job postings. That number is expected to increase by more than 30% over the next few years, with non-tech IT jobs set to boom at a 50% faster clip than IT jobs within tech. As for who is hiring, though tech companies top the list (arguably one of them isn't really a tech company), the rest of the top 10 are decidedly non-tech. Digging into the Burning Glass report, and moving beyond software developer jobs specifically and into the broader category of IT generally, Professional Services, Manufacturing, and Financial Services account for roughly half of all IT openings outside tech.


Data protection critical to keeping customers coming back for more

Despite the growing advancements on the data protection front, 51 percent of consumers surveyed said they are still not comfortable sharing their personal information. One-third of respondents said they are most concerned about it being stolen in a breach, with another 26 percent worried about it being shared with a third party. In the midst of the growing pandemic, COVID-19 tracking, tracing, containment and research depends on citizens opting in to share their personal data. However, the research shows that consumers are not interested in sharing their information. When specifically asked about sharing healthcare data, only 27 percent would share health data for healthcare advancements and research. Another 21 percent of consumers surveyed would share health data for contact tracing purposes. As data becomes more valuable to combat the pandemic, companies must provide consumers with more background and reasoning as to why they’re collecting data – and how they plan to protect it. ... As the debate grows louder across the nation, 73 percent of consumers think that there should be more government oversight at the federal and/or state/local levels.


The power of open source during a pandemic

The world needs to shift the way it's approaching problems and continue locating solutions the open source way. Individually, this might mean becoming connection-oriented problem-solvers. We need people able to think communally, communicate asynchronously, and consider innovation iteratively. We're seeing that organizations need to consider technologists less as tradespeople who build systems and more as experts in global collaboration, people who can make future-proof decisions on everything from data structures to personnel processes. Now is the time to start building new paradigms for global collaboration and find unifying solutions to our shared problems, and one key element to doing this successfully is our ability to work together across sectors. A global pandemic needs the public sector, the private sector, and the nonprofit world to collaborate, each bringing its own expertise to a common, shared platform. ... The private sector plays a key role in this method of collaboration by building a robust social innovation strategy that aligns with the collective problems affecting us all. This pandemic is a great example of collective global issues affecting every business around the world, and this is the reason why shared platforms and effective collaboration will be key moving forward.


Why Digital Transformation Always Needs To Start With Customers First

A fascinating point regarding Deloitte Insights’ research is the correlation it uncovered between an organization’s digital transformation maturity and the benefits it gains in efficiency, revenue growth, product/service quality, customer satisfaction and employee engagement. They found a hierarchy of pivots successful enterprises make to keep pursuing more agile, adaptive organizational structures combined with business model adaptability, all anchored in customer-driven innovation. The most digitally mature organizations can adopt new frameworks that prioritize market responsiveness and customer-centricity, and have an analytics- and data-driven culture with actionable insights embedded in their DNA. The two highest-payoff areas for accelerating digital maturity and achieving its many benefits are mastering data and creating more intelligent workflows. Deloitte Insights’ research team looked at the seven most effective digital pivots enterprises can make to become more digitally mature. The pivots that paid off best, as measured by revenue, margin, customer satisfaction, product/service quality and employee engagement, combined data mastery with more intelligent workflows.


A searingly honest tech CEO tells the truth about working from home

Morris believes that everyone has what she calls their "Covid Acceptance Curve." But no two employees' curves are likely to be alike. "Many possible solutions for one employee are actually counter-indicated for others," she says. "Think of your team as an overlapping series of waves, each strand representing a person and their curve. You could try to slot in a single solution across 'strands,' but it will inevitably miss so many marks, reaching people too late, too early or with something that isn't even relevant to them." Some might imagine that one of the particular failings of tech leadership is the temptation to treat all employees with one broad free lunch. There, that should please everyone. Now, says Morris of her employees: "Some are experimenting with how to juggle work and homeschooling, some are struggling with crippling isolation, some have been impacted by Covid personally, others are facing anxiety of so many kinds." I wonder whether it was always this way, but leaders didn't care so much. Each employee has always been burdened with their own practical and emotional issues not directly related to work. Now, though, it's the physical distance and the constant, lonely staring at screens that intensifies difficulties -- and leadership's ability to anticipate or even understand them.


How does open source thrive in a cloud world? "Incredible amounts of trust," says a Grafana VC

Cloud gives enterprises a "get-out-of-the-burden-of-maintaining-open-source free" card, but savvy engineering teams still want open source so as to "not lock themselves in and to not create a bunch of technical debt." How does open source help to alleviate lock-in? Engineering teams can build "a very modular system so that they can swap in and out components as technology improves," something that is "very hard to do with the turnkey cloud service." That's the technical side of open source, but there's more to it than that, Gupta noted. Referring to how Elastic ate away at Splunk's installed base, Gupta said, "The biggest reason...is there is a deep amount of developer love and appreciation and almost like an addiction to the [open source] product." This developer love is deeper than just liking to use a given technology: "You develop [it] by being able to feel it and understand the open source technology and be part of a community." Is it impossible to achieve this community love with a proprietary product? No, but "It's a lot easier to build if you're open source." He went on, "When you're a black box cloud service and you have an API, that's great. People like Twilio, but do they love it?"


Is Low-Code Or No-Code Development Suitable For Your Startup App Idea?

Speed and adaptability are key ingredients in every product development phase of a startup. Assume it will take you four months to create and launch the first version of your product. You spoke with potential customers, gathered and implemented their feedback, and created the best solution you could build based on the information you had. If those potential customers need your solution, they will be looking forward to it. And if they committed financially, they’re going to be even more eager to use it. The truth is that in a competitive market where buyers have many options, eagerness and patience are two different things. The customers may wish to use your product sooner rather than later, but they will not wait for it. Even if they don’t have better options today, they will figure out an alternative solution. Now assume you launched your product, served the first customers and gathered some more critical feedback. Your customers will not wait months for those changes, no matter how important your product is for them. Speed and adaptability can make or break a startup.


Tackle.io's Experience With Monitoring Tools That Support Serverless

Tackle runs microservices such as managed containers on AWS Fargate, deploys its front end on Amazon CloudFront, and uses Amazon DynamoDB for its database, Woods says. “We’ve spent a lot of time making sure that our architecture is something scalable and allows us to provide value to our customers without interruption,” he says. Tackle’s clientele includes software and SaaS companies such as GitHub, PagerDuty, New Relic, and HashiCorp. Despite the benefits, Woods says running serverless can introduce such issues as trying to find obscure failures with APIs. “Once you adopt serverless, you’ll have a chain of Lambda functions calling each other,” he says. “You know that somewhere in that process was an error. Tracing it is really difficult with the tools provided out of the box.” Before adopting Sentry, Tackle spent a lot of engineering hours trying to discover the root cause of problems, Woods says, such as why a notification was not sent to a customer. “It might take half a day to get an answer on that.” Tackle adopted Sentry’s technology initially to get backtraces on such errors. Woods says his company soon discovered Sentry also sends alerts for failures Tackle was not aware of in its web app.
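The tracing pain Woods describes comes down to correlating one failure across a chain of separate invocations. A toy, standard-library-only illustration of the idea behind tools like Sentry: propagate a single trace ID through the chain and record each failing hop against it. The function names, event shape, and in-memory log are invented for the example:

```python
import uuid

TRACE_LOG = []  # stand-in for a centralized error store such as Sentry

def traced(fn):
    """Propagate one trace_id through a chain of handlers and record failures."""
    def wrapper(event):
        event.setdefault("trace_id", uuid.uuid4().hex)
        try:
            return fn(event)
        except Exception as exc:
            TRACE_LOG.append({"trace_id": event["trace_id"],
                              "function": fn.__name__,
                              "error": repr(exc)})
            raise
    return wrapper

@traced
def notify_customer(event):
    raise RuntimeError("email provider timeout")  # the obscure failure

@traced
def process_order(event):
    return notify_customer(event)  # one function invoking the next in the chain

try:
    process_order({"order_id": 42})
except RuntimeError:
    pass

# Every failed hop shares the same trace_id, and the deepest entry
# points straight at the root cause (notify_customer).
assert all(e["trace_id"] == TRACE_LOG[0]["trace_id"] for e in TRACE_LOG)
assert TRACE_LOG[0]["function"] == "notify_customer"
```

Real distributed tracing passes the ID across process boundaries (for example in event payloads or HTTP headers) rather than a shared dict, but the correlation principle is the same.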



Quote for the day:

"You can't lead anyone else further than you have gone yourself." -- Gene Mauch

Daily Tech Digest - August 30, 2020

'Lemon Duck' Cryptominer Aims for Linux Systems

The malware uses the infected computer to replicate itself in a network and then uses the contacts from the victim's Microsoft Outlook account to send additional spam emails to more potential victims, the report notes. "People are more likely to trust messages from people they know than from random internet accounts," Rajesh Nataraj, a researcher with Sophos Labs, notes. The malware contains code that generates email messages with dynamically added malicious files and subject lines pulled from its database, with phrases such as: "The Truth of COVID-19," "COVID-19 nCov Special info WHO" or "HEALTH ADVISORY: CORONA VIRUS," according to the report. Researchers found that Lemon Duck malware exploits the SMBGhost vulnerability found in versions 1903 and 1909 of the Windows 10 operating system. Exploiting this vulnerability allows for remote code execution. Microsoft fixed this bug in March, but unpatched systems remain at risk. The code used in Lemon Duck also leverages the EternalBlue vulnerability in Windows to help the malware spread laterally through enterprise networks.


Can AI Reimagine City Configuration and Automate Urban Planning?

While the concept of AI-enabled automated urban planning is appealing, the researchers quickly encountered three challenges: how to quantify a land-use configuration plan, how to develop a machine learning framework that can learn the good and the bad of existing urban communities in terms of land-use configuration policies, and how to evaluate the quality of the system’s generated land-use configurations. The researchers began by formulating the automated urban planning problem as a learning task on the configuration of land-use given surrounding spatial contexts. They defined land-use configuration as a longitude-latitude-channel tensor with the goal of developing a framework that could automatically generate such tensors for unplanned areas. The team developed an adversarial learning framework called LUCGAN to generate effective land-use configurations by drawing on urban geography, human mobility, and socioeconomic data. LUCGAN is designed to first learn representations of the contexts of a virgin area and then generate an ideal land-use configuration solution for the area.


AT&T Waxes 5G Edge for Enterprise With IBM

As enterprises increasingly shift to a hybrid-cloud model, IBM is working with AT&T and other operators to allow businesses to deploy applications or workloads wherever they see fit, Canepa said. “That includes now what we’re highlighting here, the mobile edge environment that comes with this, the emerging 5G world.” Because enterprises are no longer restricted to a single cloud architecture on premises, they’re gaining access to a larger pool of potential innovation sources, he explained. This extends to mobile network operators’ infrastructure as well. “Up until this point, the networks inside the telcos were very kind of structured environments, hardwired, specialized equipment that was really good at what it did, but did a fairly limited set of things,” Canepa said. “What we’re evolving to now is truly a hybrid-cloud environment where that network itself becomes a platform. And then the ability to extend that platform to the edge creates a whole new opportunity to create new insights as a service, new applications, and solutions that can be deployed in that environment.”


Databricks Delta Lake — Database on top of a Data Lake

The most challenging issue was the lack of database-like transactions in Big Data frameworks. To cover for this missing functionality we had to develop several routines that performed the necessary checks and measures. However, the process was cumbersome, time-consuming and frankly error-prone. Another issue that used to keep me awake at night was the dreaded Change Data Capture (CDC). Databases have a convenient way of updating records and showing the latest state of a record to the user. In Big Data, on the other hand, we ingest data and store it as files. Therefore, the daily delta ingestion may contain a combination of newly inserted, updated or deleted data. This means we end up storing the same row multiple times in the Data Lake. ... Developed by Databricks, Delta Lake brings ACID transaction support to your data lakes for both batch and streaming operations. Delta Lake is an open-source storage layer for big data workloads over HDFS, AWS S3, Azure Data Lake Storage or Google Cloud Storage. Delta Lake packs in a lot of cool features useful for Data Engineers.
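The CDC problem described above is what Delta Lake's transactional MERGE (upsert) addresses: collapsing a stream of insert/update/delete events into the latest state of each record. As a plain-Python sketch of the resolution logic, assuming each daily delta is an ordered batch of events keyed by id (the event encoding is invented for the example):

```python
def latest_state(delta_batches):
    """Replay daily delta files (inserts/updates/deletes) into current state,
    the way a MERGE/upsert resolves duplicated rows in a data lake."""
    state = {}
    for batch in delta_batches:            # batches replayed in ingestion order
        for op, row in batch:
            if op in ("I", "U"):
                state[row["id"]] = row     # last write for a key wins
            elif op == "D":
                state.pop(row["id"], None)
    return state

day1 = [("I", {"id": 1, "status": "new"}),
        ("I", {"id": 2, "status": "new"})]
day2 = [("U", {"id": 1, "status": "shipped"}),
        ("D", {"id": 2})]

assert latest_state([day1, day2]) == {1: {"id": 1, "status": "shipped"}}
```

Without ACID transactions, running this kind of reconciliation over raw files is exactly the cumbersome, error-prone work the author describes; Delta Lake's transaction log makes the equivalent merge atomic.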


Developing a scaling strategy for IoT

“One of the most often overlooked or under budgeted issues of IoT scaling is not the initial build out of the system which is typically well planned for, but the long-term maintenance and support of what can quickly become a huge network of devices that are often deployed in difficult to reach locations,” he said. “That complexity requires a resilient network to ensure that all of these IoT devices, connected via an aggregation point, can be securely managed and updated to extend their lifespan. Where edge compute is necessary due to the density of connected IoT devices, it is also advisable to provide scalable, secure and highly reliable remote management for all the IoT network infrastructure that provides a fast and predictable way to recover from failures. “An independent management network should provide a secure alternate access path, including the ability to quickly re-deploy any software and or configs automatically onto connected equipment if they need to be re-built, ideally without having to send an engineer to site. In general networking terms, it is very important to ensure that the IoT gateways and edge compute equipment stack is actively monitored and that it is designed with resiliency in mind.”


Creating The Vision For Data Governance

The first step in every successful data governance effort is the establishment of a common vision and mission for data and its governance across the enterprise. The vision articulates the state the organization wishes to achieve with data, and how data governance will foster reaching that state. Through the skills of a specialist in data governance and using the techniques of facilitation, the senior business team develops the enterprise’s vision for data and its governance. All of the subsequent activities of any data governance effort should be formed by this vision. Visioning offers the widest possible participation for developing a long-range plan, especially in enterprise-oriented areas such as data governance. It is democratic in its search for disparate opinions from all stakeholders and directly involves a cross-section of constituents from the enterprise. Developing a vision helps avoid piecemeal and reactionary approaches to addressing problems. It accounts for the relationship between issues, and how one problem’s solution may generate other problems or have an impact on another area of the enterprise. Developing a vision at the enterprise level allows the organization to create a holistic approach to setting goals that will enable it to realize the vision.


Google Announces a New, More Services-Based Architecture Called Runner V2 to Dataflow

Runner V2 has a more efficient and portable worker architecture rewritten in C++, which is based on Apache Beam's new portability framework. Moreover, Google packaged this framework together with Dataflow Shuffle for batch jobs and Streaming Engine for streaming jobs, allowing them to provide a standard feature set from now on across all language-specific SDKs, as well as share bug fixes and performance improvements. The critical component in the architecture is the worker Virtual Machines (VMs), which run the entire pipeline and have access to the various SDKs. ... If features or transforms are missing for a given language, they must be duplicated across the various SDKs to ensure parity; otherwise, there will be gaps in feature coverage, and newer SDKs like the Apache Beam Go SDK will support fewer features and exhibit inferior performance characteristics for some scenarios. Currently, Dataflow Runner v2 is available for Python streaming pipelines, and Google recommends that developers test the new runner with current non-production workloads before it is enabled by default on all new pipelines.


DOJ Seeks to Recover Stolen Cryptocurrency

The cryptocurrency stolen from the two exchanges was later traded for other types of virtual currency, such as bitcoin and tether, to launder the funds and obscure their transaction path, the Justice Department says. The civil lawsuit relates to a criminal case that the Justice Department brought against two Chinese nationals for their alleged role in laundering $100 million in cryptocurrency stolen from exchanges by North Korean hackers in 2018. The two suspects, Tian Yinyin and Li Jiadong, are each charged with money laundering conspiracy and operating an unlicensed money transmitting business. The two also face sanctions from the U.S. Treasury Department. U.S. law enforcement officials and intelligence agencies, including the Cybersecurity and Infrastructure Security Agency, believe these types of crypto heists are carried out by the Lazarus Group, a hacking collective also known as Hidden Cobra. Earlier this week, CISA, the FBI and the U.S. Cyber Command warned of an uptick in bank heists and cryptocurrency thefts since February by a subgroup of the Lazarus Group called BeagleBoyz.


The increasing importance of data management

The goal of data management is to facilitate a holistic view of data and enable users to access and derive optimal value from it—both data in motion and at rest. Along with other data management solutions, DataOps leads to measurably better business outcomes: boosted customer loyalty, revenue, profit, and other benefits. The trouble with achieving these goals lies in part in businesses not understanding how to translate the information they hold into actionable outcomes. Once a business has mined all the information it holds to unearth valuable insights, it can then enact changes or implement efficiencies to yield returns. ... Data security is consistently rated among the highest concerns and priorities of IT management and business leaders alike. But we can’t say that technology is always the answer in ensuring that data is securely and safely stored. A key challenge is getting alignment across organizations on the classification of data by risk and on how data should be stored and protected. That makes security a human issue; the tech is often easy. Two-thirds of survey respondents report insufficient data security, making data security an essential element of any discussion of efficient data management.


What Companies are Disclosing About Cybersecurity Risk and Oversight

More boards are assigning cybersecurity oversight responsibilities to a committee. Eighty-seven percent of companies this year have charged at least one board-level committee with cybersecurity oversight, up from 82% last year and 74% in 2018. Audit committees remain the primary choice for those responsibilities. This year 67% of boards assigned cybersecurity oversight to the audit committee, up from 62% in 2019 and 59% in 2018. Last year we observed a significant increase in boards assigning cybersecurity oversight to non-audit committees, most often risk or technology committees, (28% in 2019 up from 20% in 2018), but that percentage dropped this year (26% in 2020). A minority of boards, 7% overall, assigned cyber responsibilities to both the audit and a non-audit committee. Among the boards assigning cybersecurity oversight responsibilities to the audit committee, nearly two-thirds (65%) formalize those responsibilities in the audit committee charter. Among the boards assigning such responsibilities to non-audit committees, most (85%) include those responsibilities in the charter.



Quote for the day:

"For true success ask yourself these four questions: Why? Why not? Why not me? Why not now?" -- James Allen

Daily Tech Digest - August 29, 2020

Banks aren’t as stupid as enterprise AI and fintech entrepreneurs think

First, banks have something most technologists don’t have enough of: Banks have domain expertise. Technologists tend to discount the exchange value of domain knowledge. And that’s a mistake. Too much technology, lacking critical discussion, deep product-management alignment, and crisp, clear business usefulness, remains abstract and detached from the material value it seeks to create. Second, banks are not reluctant to buy because they don’t value enterprise artificial intelligence and other fintech. They’re reluctant because they value it too much. They know enterprise AI gives a competitive edge, so why should they get it from the same platform everyone else is attached to, drawing from the same data lake? Competitiveness, differentiation, alpha, risk transparency and operational productivity will be defined by how highly productive, high-performance cognitive tools are deployed at scale in the incredibly near future. The combination of NLP, ML, AI and cloud will accelerate competitive ideation by an order of magnitude. The question is, how do you own the key elements of competitiveness? It’s a tough question for many enterprises to answer.


Artificial Intelligence (AI) strategy: 3 tips for crafting yours

AI can drive value only if it is applied to a well-defined business problem, and you’ll only know if you’ve hit the mark if you precisely define what success looks like. Depending on the business objective, AI will commonly target profitability, customer experience, or efficiency. Automation from AI can yield cost savings or costs that are redirected to other uses. ... Treat your data as a treasured asset. While data quality and merging disparate data sources are common challenges, one of the biggest challenges in data integration initiatives is streamlining, if not automating, the process of turning data into actionable insights. ... If you are looking to develop AI capabilities in-house, keep in mind that AI teams can benefit from having a balance of skillsets. For example, deep expertise in modeling is critical for thorough research and solution development. Data engineering skills are essential in order to execute the solution. Your AI teams also need leaders who understand the technology, at least enough to know what is and is not possible. In running an AI team, it is important to create an environment that fosters creativity but provides structure. Keep the AI team connected to business leaders in the organization to ensure that AI is being applied to high-priority, high-value use cases that are properly framed.


How special relativity can help AI predict the future

Researchers have tried various ways to help computers predict what might happen next. Existing approaches train a machine-learning model frame by frame to spot patterns in sequences of actions. Show the AI a few frames of a train pulling out of a station and then ask it to generate the next few frames in the sequence, for example. AIs can do a good job of predicting a few frames into the future, but the accuracy falls off sharply after five or 10 frames, says Athanasios Vlontzos at Imperial College London. Because the AI uses preceding frames to generate the next one in the sequence, small mistakes made early on—a few glitchy pixels, say—get compounded into larger errors as the sequence progresses. Vlontzos and his colleagues wanted to try a different approach. Instead of getting an AI to learn to predict a specific sequence of future frames by watching millions of video clips, they allowed it to generate a whole range of frames that were roughly similar to the preceding ones and then pick those that were most likely to come next. The AI can make guesses about the future without having to learn anything about the progression of time, says Vlontzos.
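The generate-and-select idea described above can be sketched in a few lines of toy code. This is a hypothetical illustration, not the researchers' actual method: it produces many candidate next frames that are roughly similar to the last observed frame, then picks the one most consistent with a simple motion extrapolation from the preceding frames. The function name, noise model, and scoring rule are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_next_frame(frames, n_candidates=100, noise=0.1):
    """frames: array of shape (t, h, w); returns the best-scoring candidate frame."""
    last = frames[-1]
    # Simple linear motion estimate from the last two frames.
    expected = last + (last - frames[-2])
    # Generate candidates as perturbations of the last frame
    # ("roughly similar to the preceding ones").
    candidates = last + noise * rng.standard_normal((n_candidates, *last.shape))
    # Score candidates by closeness to the motion-extrapolated frame
    # and pick the one "most likely to come next".
    errors = np.mean((candidates - expected) ** 2, axis=(1, 2))
    return candidates[np.argmin(errors)]

# Usage: a toy "video" of a diagonal pattern drifting right across an 8x8 grid.
frames = np.stack([np.roll(np.eye(8), shift, axis=1) for shift in range(3)])
nxt = predict_next_frame(frames)
print(nxt.shape)  # (8, 8)
```

The point of the design is that the model never learns the progression of time; it only needs a way to generate plausible frames and a way to rank them.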


TypeScript's co-creator speaks out on TypeScript 4.0

TypeScript was one of several efforts inside and outside Microsoft in those years to tackle this need -- first for large companies like Microsoft and Google, but ultimately for the broader industry, which was all moving in the same direction. Other options, like Google Dart, tried to replace JavaScript, but this proved to present too large a compatibility gap with the web as it was and is. TypeScript, by being a superset of JavaScript, was compatible with the real web, and yet also provided the tooling and scalability needed for the large and complex web applications of the early 2010s. Today, that scale and complexity are commonplace, the standard for any SaaS company or internal enterprise LOB [line of business] application. And TypeScript plays the same role today, just for a much larger segment of the market. ... TypeScript's biggest contribution has been in bringing amazing developer tools and IDE experiences to the JavaScript ecosystem. By bringing types to JavaScript, so many error-checking, IDE tooling, API documentation and other developer productivity benefits light up. It's the experience of these developer productivity benefits that has driven hundreds of thousands of developers to use TypeScript.


Enabling transformation: How can security teams shift their perception?

There are clear opportunities to deliver this transformation through the adoption of a unified security approach. By this, we mean the integration, rationalisation and centralisation of security environments into a holistic ecosystem. Adopting such an approach can help improve the operator experience and make things simpler for the teams charged with maintenance – while also providing a cure to the headaches caused by platform proliferation. Not only this, but a unified security approach is a key enabler in helping security leaders engage at the board level by delivering cost transformation. An integrated security environment will serve to streamline operations for security teams, allowing staff to focus on higher-value tasks while automating repetitive processes. In business terms, this means clawing back up to 155 days’ worth of effort for the average UK security team. Clearly, cost reduction and operational efficiencies are central to demonstrating business impact, but they should be viewed as a starting point rather than a security team’s entire value proposition.


Deep Learning Models for Multi-Output Regression

Neural network models also support multi-output regression and have the benefit of learning a continuous function that can model a more graceful relationship between changes in input and output. Multi-output regression can be supported directly by neural networks simply by specifying the number of target variables there are in the problem as the number of nodes in the output layer. For example, a task that has three output variables will require a neural network output layer with three nodes in the output layer, each with the linear (default) activation function. We can demonstrate this using the Keras deep learning library. We will define a multilayer perceptron (MLP) model for the multi-output regression task defined in the previous section. Each sample has 10 inputs and three outputs, therefore, the network requires an input layer that expects 10 inputs specified via the “input_dim” argument in the first hidden layer and three nodes in the output layer. We will use the popular ReLU activation function in the hidden layer. The hidden layer has 20 nodes, which were chosen after some trial and error. We will fit the model using mean absolute error (MAE) loss and the Adam version of stochastic gradient descent.
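The description above maps directly onto a short Keras sketch. This assumes TensorFlow/Keras is installed, and uses randomly generated data as a stand-in for the dataset defined in the article's previous section, which is not reproduced here:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# MLP for multi-output regression: 10 input features, 3 output targets.
model = keras.Sequential([
    keras.Input(shape=(10,)),             # 10 inputs (the "input_dim" above)
    layers.Dense(20, activation='relu'),  # hidden layer: 20 nodes, ReLU
    layers.Dense(3),                      # 3 output nodes, linear (default) activation
])

# Mean absolute error loss with the Adam optimizer, as described above.
model.compile(loss='mae', optimizer='adam')

# Stand-in data: 100 samples, each with 10 inputs and 3 outputs.
X = np.random.rand(100, 10)
y = np.random.rand(100, 3)
model.fit(X, y, epochs=2, verbose=0)
```

Predictions then come back with one column per target variable: `model.predict(X[:5])` returns an array of shape `(5, 3)`.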


Machine learning wards off threats at TV studio Bunim Murray

While its name is probably little-known to most viewers, Bunim Murray is kind of a big deal in TV. Founded in the late 1980s when two TV producers were flung together to produce a so-called ‘unscripted soap opera’ for the MTV network, the resulting show, The Real World, was instrumental in establishing the reality TV genre. The new company went on to develop global hits including Keeping Up With The Kardashians, Project Runway and The Simple Life. Bunim Murray’s CTO Gabe Cortina arrived at the firm with the infamous 2014 hack on Sony Pictures weighing on his mind. This incident centred on the release of The Interview, a comedy starring Seth Rogen and James Franco which depicted the fictionalised assassination of North Korean dictator Kim Jong-Un. Likely perpetrated by groups with links to the North Korean state, the large-scale leak of data from the studio caused great embarrassment for many high-profile individuals. From the get-go, Cortina understood that a similar kind of breach could be seriously damaging to Bunim Murray. “We’ve been in business for 30 years. We have a strong brand and we’re known for delivering high-quality shows,” he tells Computer Weekly.


Security Concerns for Peripheral APIs on the Web

To ensure a relatively secure browsing experience, browsers sandbox websites, providing only limited access to the rest of the computer and even to other websites open in different tabs/windows. What differentiates the Web Bluetooth/USB APIs from other Web APIs, such as MediaStream or Geolocation, that have received wide adoption from all browser vendors is the level of specificity they offer. When a user enters a website that uses the Geolocation API, the browser shows a pop-up requesting permission to access the current position. While approving this request can pose a security risk, the user makes a conscious decision to provide his or her location to the website. At the same time, the browser exposes a set of specific API calls (such as getCurrentPosition) that do exactly that. On the other hand, Bluetooth and USB communication work at a lower level, making it difficult to discern which actions are being performed by the website. For example, Web Bluetooth communication with a device happens through the writeValue method, which accepts arbitrary data and can trigger any number of actions on the target device.



Regulated Blockchain: A New Dawn in Technological Advancement

What a regulated blockchain portends is that the impact that negative statements from government officials and the media, along with regulatory uncertainties, have been having on entrepreneurs, investors, the market, and the industry at large will be a thing of the past. One area where we have started seeing the positive impact and transformation of technology is digital currency. The internet was the precursor of cashless policy and internet banking, all of which greatly reduced the stress people had to go through to conduct business. The Chinese government vehemently opposed cryptocurrency because it was decentralized, but it is of great relief to see that the People's Bank of China (PBOC) is at the forefront of legitimizing digital currency. As part of a pilot program, the PBOC introduced a homegrown digital currency across four cities, a huge leap towards actualizing the first electronic payment system by a major central bank. The Bank of England (BoE) is also following in China's footsteps, though only at a review stage as of July 2020. Andrew Bailey, the Governor of the BoE, was reported to have said, “I think in a few years, we will be heading toward some sort of digital currency.”


It’s never the data breach -- it’s always the cover-up

This is a warning to CSOs and CISOs: Remove all sense of impropriety in IR. Concealing a data breach is illegal. Every decision made during an incident might be used in litigation and will be scrutinized by investigators. In this case, it has also led to criminal charges filed against a well-known security leader. If your actions seem to conceal rather than investigate and resolve a data breach, expect consequences. Neither the ransom nor the bug bounty is at issue here. Paying the ransom through the bug bounty was alleged to help conceal the breach. Firms should develop a digital extortion policy, so that there are no allegations of impropriety should they choose to pay a ransom. In addition, the guidelines of your bug bounty program should not be altered on the fly to facilitate non-bug-bounty activities. Work closely and openly with senior leadership on breaches and issues of ransom. Sullivan tried to get the hackers to sign non-disclosure agreements -- legal documents between two legitimate entities that effectively acknowledged the hackers as business entities -- which allowed Uber to treat the hackers as third parties. Treating the ransom as a "cost of doing business" helped conceal the payment from the management team as well.



Quote for the day:

"What I've really learned over time is that optimism is a very, very important part of leadership." -- Bob Iger