Daily Tech Digest - January 21, 2020

How low-code helps CIOs accelerate digital transformation

As digital transformation has become the main agenda, CIOs are using technology strategically and leveraging digital opportunities. With an estimated 40% of technology spending in 2019 (more than $2 trillion) assigned to digital transformation initiatives, adopting emerging technology has become the biggest objective for enterprises. The app economy plays a crucial role in driving digital transformation and business innovation. CIOs have to consider the people, platforms, and processes that will cater to the increasing demand for modern applications. That demand for enterprise applications has led to growing adoption of low-code platforms in the Application Development & Delivery (AD&D) market. Enterprises are working towards leveraging agile practices and incorporating development techniques to create a minimum viable product (MVP). CIOs and IT leaders have to determine which practices, technologies, and skills are required to achieve modernisation.



.NET Core: Writing Really Obvious Code with Enumerated Values in gRPC Web Services


gRPC services support using enumerated values (enums) when creating the .proto file that drives your gRPC service and the clients that access it (for more on how that works, see the column I wrote on creating gRPC services and clients). Since the definitions of the messages that you send to and receive from a gRPC service are converted into C# classes, defining enums in your .proto file gives you the same ROCing benefits that defining enums in your code does. ... If you prefer Pascal-cased names in your code, then you'll need to deploy underscores strategically. To get CreditLimit as the name of your enumerated value, you'll need to name the field using an underscore before "limit" in your .proto file (e.g. Credit_Limit, CREDIT_LIMIT, or credit_limit would do the trick). One last note on the default value: A client can't tell the difference between a property that's been set to the default value for your enum and a property that hasn't been set at all. A best practice, therefore, would be to make the default value for your enum (the one in position 0) the "no value available" option and never use it. That way a client can tell when the property hasn't been set.
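
A minimal .proto sketch of both conventions may help; the AccountStanding enum and CustomerAccount message below are hypothetical names for illustration, not taken from the column:

syntax = "proto3";

// Position 0 is the wire default. Reserving it as the "no value available"
// option lets a client tell an unset property from a deliberately set one.
enum AccountStanding {
  ACCOUNT_STANDING_UNSPECIFIED = 0;
  // Underscore-separated names are Pascal-cased by the C# code generator,
  // so CREDIT_LIMIT surfaces as CreditLimit in the generated class.
  CREDIT_LIMIT = 1;
  GOOD_STANDING = 2;
}

message CustomerAccount {
  AccountStanding standing = 1;
}

Generated C# code that receives a CustomerAccount can then treat the zero-position value as "the sender never set this" and act accordingly.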


Solving the Big Data, Small Returns Problem in 2020


All technicians are humans, but not all humans are technicians. If we are going to build a new world, we should build it from the ground up and from a human-centric angle. We need to flip the model of thinking from data- and tech-first to use-case-first. To make this possible, we finally need to get around to answering three basic, yet wildly complicated questions: What data do we have? Where is it? And how do we get value from it? We’ve learned that having more data doesn’t equate to having better insights. So we need to collect data specific to our questions. We still have to work with legacy architectures and infrastructures that have been cobbled together over time, and data in various forms from different sources that were never designed to work together in harmony. So we need to be meticulous about where we keep our data and how we organize it, so that it’s visible and accessible. And as far as getting value from data, we need to put the human element back into analytics. The analytics will only be as good as the person who asks challenging questions. That person should not need a technical background to do so.


Why The Digital Economy Is Set For A Correction

There’s certainly an expectation that one of the consequences of Digital is that things just get cheaper and consequently, we can consume more of them. But how do these things become cheaper and what are the consequences of falling real prices? Two things, above all else, have contributed to the decline in real prices for the digital-meets-physical category: taxation and algorithmically driven labour efficiency. Tax minimisation is facilitated by an international tax regime dating back to the 1920s, when it was reasonable to tax corporations based on physical presence. This doesn’t really work in a world of transfer pricing, where rents can be extracted from subsidiaries in high-tax locations for the use of corporate intangible assets such as brands, patents, and software, thus minimising profits. Taxation will, eventually, get sorted. France and, surprisingly, the UK seem to be leading the way on this. The decline of unit labour inputs is another matter. If we think about the design of, for example, a work management system used in a warehouse, its major purpose is to avoid employee downtime.


The Move to Multiple Public Clouds Creates Security Silos


Often when organizations migrate from on-premises to public cloud environments, security teams want to continue using the same approach for protecting applications and data. But use of a public cloud, especially multiple public clouds, introduces new attack vectors that require better visibility into what is happening across the entire ecosystem. Security tools offered by public cloud vendors are often a popular choice to fill the gap following migration. The majority of respondents who said that their organizations used public cloud environments indicated that they selected native security tools, or a combination of native tools with third-party solutions, to secure their public cloud. One possible reason organizations adopt a heterogeneous approach to securing public clouds is that public cloud vendors are not cybersecurity experts and typically provide best-of-breed individual security tools rather than a 360-degree holistic security solution.


Can an AI be an inventor? Not yet.

For Abbott, the fact that we are not at the point where machines are routinely inventors is part of the point: society, he argues, needs to figure this out early. He acknowledges that AI doesn’t just spring into existence—it must be coded and trained and fed data—but that doesn’t necessarily mean everything an AI creates can or should be traced back to humans. Hundreds or thousands of people might be involved in programming IBM’s supercomputer Watson with general problem-solving capabilities, but “if Watson then applies those capabilities and solves a particular problem in a way that results in a patent, it’s not clear that anything any of those people have done qualifies them to be an inventor,” Abbott says. But if humans can’t be listed as inventors because they weren’t intimately involved, and the AI can’t be listed as an inventor either, then the invention may not be patentable at all. This, Abbott suggests, could be problematic. It could prevent companies from investing money in AI technologies and prevent breakthroughs in important areas like drug discovery.


Why employees can pose the biggest cloud migration challenge

While IT departments can guarantee corporate technology is working as it should, they can’t always control the people using it or the devices they may wish to use. So steps need to be taken to ensure that, whatever devices employees use, those devices do not become easy pickings for the cybercriminals who pose a threat to the corporate network. The first step is to educate the workforce on those threats. With people being asked for multiple passwords when accessing online accounts these days, it’s common for employees to choose something that’s easy to remember. But easy to remember also means easy to guess. It’s common to hear of hackers successfully cracking passwords by using personal information they have siphoned from social media, whether that’s your favourite football team or the names of your children. It’s advisable for IT departments to work with HR to alert employees to the dangers of weak passwords, along with other cyber-attack techniques, such as phishing via email.


Google CEO Sundar Pichai: This is why AI must be regulated

Microsoft's recent calls for government regulation have focused on the use of facial-recognition technology in public spaces, arguing that if left unchecked it will increase the risk of biased decisions and outcomes for groups of people already discriminated against. The timing of Pichai's post is unlikely to be a coincidence. Euractiv reporters last week published a leaked European Commission proposal touting a three- to five-year ban on facial-recognition technology by public and private-sector organizations in public spaces until regulators can develop solid methods for assessing the risks of the technology and risk-management approaches. "This would safeguard the rights of individuals, in particular against any possible abuse of the technology. It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes (subject to a decision issued by a relevant court)," the Commission wrote.



"The short answer is that Rust solves pain points present in many other languages, providing a solid step forward with a limited number of downsides," explains Jake Goulding on Stack Overflow's blog. Goulding is the co-founder of Rust consultancy Integer 32, so he has a vested interest in Rust's success, but he's also not alone in taking a shine to the young language. Microsoft is experimenting with Rust to reduce memory-related bugs in Windows components. Every single bug costs Microsoft on average $150,000 to patch and in 2018 there were 468 memory issues it needed to resolve. Over the past decade, more than 70% of the security patches it has shipped addressed memory-related bugs. Rust concepts are also being used in Microsoft's recently open-sourced Project Verona, an experimental language for safe infrastructure programming that could help Microsoft securely retain legacy C and C# code.  Mozilla Research describes Rust as a "systems programming language that focuses on speed, memory safety, and parallelism". It's often seen as an alternative to systems programming languages like C and C++ that developers use to create game engines, operating systems, file systems, browser components, and VR simulation engines.


5 IT Operations Cost Traps and How to Avoid Them

At first glance, centralization contradicts the spirit of DevOps and Agile. Agile teams want to be self-sufficient. They want to have all needed skills on their team so they don’t depend on external, centralized help to deliver their sprints. While such self-sufficiency is a guiding principle, DevOps teams always rely on some centralized teams. Hopefully, no DevOps team considers building its own data centers or trying to manage the OS level, with all its virus scanning and patch management, by itself. So the real questions are: what must be sourced to a centralized team for cost, compliance, or other reasons? In which areas are project or product teams free to choose to do the work themselves, even if there is a centralized team for that topic? Figure 2 below illustrates this ecosystem of standard services. Ultimately, every company and IT organization has to ensure that teams, Agile or not, perform activities and make decisions in line with overall company goals and the CIO’s strategy for IT. These define the boundaries within which all Agile or non-Agile and DevOps or old-fashioned development and operations teams act.



Quote for the day:


"Leave every person you interact with feeling better about themselves; feeling loved & appreciated."  --Wright Thurston

