Daily Tech Digest - December 04, 2019

9 bad programming habits we secretly love

For the last decade or so, the functional paradigm has been ascending. The acolytes for building your program out of nested function calls love to cite studies showing how the code is safer and more bug-free than the older style of variables and loops, all strung together in whatever way makes the programmer happy. The devotees speak with the zeal of true believers, chastising non-functional approaches in code reviews and pull requests. They may even be right about the advantages. But sometimes you just need to get out a roll of duct tape. Wonderfully engineered and gracefully planned code takes time, not just to imagine but also to construct and later to navigate. All of those layers add complexity, and complexity is expensive. Developers of beautiful functional code need to plan ahead and ensure that all data is passed along proper pathways. Sometimes it’s just easier to reach out and change a variable. Maybe put in a comment to explain it. Even adding a long, groveling apology to future generations in the comment is faster than re-architecting the entire system to do it the right way.
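To put the trade-off in concrete terms, here is a minimal TypeScript sketch of the two styles the passage contrasts; the function and variable names are invented for illustration:

```typescript
// The "wonderfully engineered" style: state flows through arguments and
// return values, never mutated in place.
function applyDiscount(order: { total: number }, rate: number): { total: number } {
  return { ...order, total: order.total * (1 - rate) };
}

// The duct-tape style the passage describes: just reach out and change
// a variable, with an apologetic comment for future generations.
let orderTotal = 100;

function applyHolidayDiscount(): void {
  // Sorry: mutating shared state here was faster than re-architecting
  // the whole pipeline to pass this value along proper pathways.
  orderTotal *= 0.9;
}

applyHolidayDiscount();
console.log(orderTotal); // 90
```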



Advancements in explainable AI will continue in 2020 and beyond as new standards are developed around the technical definition of explainability, slowly followed by new technologies to address the explainability problem for business leaders and non-technical audiences. In real estate, for example, offering a compelling explanation for why a mortgage application was rejected by an AI-driven platform will eventually be a necessity as AI adoption continues. Although we’ll see evolving technical tools and standards, progress on layperson tools will be slower, with some narrow and domain-specific solutions (e.g., non-technical explainability for finance) emerging first. Like the general public’s understanding of ‘the web’ in the 90s, awareness, understanding and trust in AI will gradually increase as the capabilities and use of the technology spread. Using sophisticated tooling to automate what we would call human creativity is now commonly referred to as AI. However, the term has become almost meaningless, as “AI” now covers everything from predictive analytics to Amazon Echo speakers. The industry needs to get its arms around real AI.

Volkswagen Is Accelerating One Of The World’s Biggest Smart-Factory Projects

The biggest challenge, says Jean-Pierre Petit, Capgemini’s director of digital manufacturing, in an emailed comment to Forbes, is to “cross the chasm” from an initial pilot in a single plant to full-scale deployments, which is where the real benefits of digitization kick in. In particular, smart-factory projects require IT teams to work closely with “operational technology” (OT) groups managing machinery and other tech inside factories. Often, OT teams have become used to working quite independently and may resist IT’s efforts to drive change. By working closely together on VW’s industrial cloud project, Hofmann and Walker are sending a strong signal to their respective teams about the need for tight collaboration. The decision to launch pilots at several factories this year rather than just one was also deliberate. “You can put a ton of slides up [about the industrial cloud], but nobody is interested in that,” says Dirk Didascalou, one of the senior AWS executives involved in the project. “They need to see it working first.”



The question that helps businesses overcome unconscious bias

In the workplace, when you’re considering someone for a project or a promotion, turn that mantra into a question: What do I know about this person? You may have a feeling that this person is someone you do or don’t like or connect with, or a sense that this person “is ready for” and “deserves” the opportunity. Guided by that sense, you can easily pick and choose facts from their experience and work records to reinforce your decision. But when you start only with facts, a different picture can emerge. So drill down exclusively on what’s concrete. What projects did this person take part in or help lead, and how successful were they? What do the 360-degree assessments of this person show? What demonstrable impact did this person’s work have on sales, revenues, morale? Sometimes, the facts will back up a general sense that you have, or a description that someone else gave you. 


It's almost a cliché to point out how much of today's software is built on or with open source. But Ian Massingham recently reminded me that for all the attention we lavish on back-end technologies (Linux, Docker containers, Kubernetes, etc.), front-end open source technologies actually claim more developer attention. Much of the front-end open source magic that developers love today was born at early web giants like Google and Facebook. Front-end frameworks make it possible for Facebook, Google, LinkedIn, Pinterest, Airbnb, and others to iterate quickly, scale, deliver consistently fast responsiveness and, in general, mostly delight their users. Indeed, their entire businesses depend on great user experiences. While venture investors historically have plowed their funds into back-end startups creating open source software, the same is not nearly as true of the front end. Accel, Benchmark, Greylock, and other top-tier VCs made fortunes backing enterprise open source software startups like Heroku, MuleSoft, Red Hat, and many more.


Migrating to GraphQL at Airbnb

Two GraphQL features Airbnb relied upon during this early stage were aliasing and adapters. Aliasing allowed mapping between camel-case properties returned from GraphQL and snake-case properties of the old REST endpoint. Adapters were used to convert a GraphQL response so that it could be recursively diffed with a REST response, ensuring GraphQL was returning the same data as before. These adapters would later be removed, but they were critical for meeting the parity goals of the first stage. Stage two focuses on propagating types throughout the code, which increases confidence during later stages. At this point, no runtime behavior should be affected. The third stage improves the use of Apollo. Earlier stages directly used the Apollo Client, which fired Redux actions, and components used the Redux store. Refactoring the app using React Hooks allows use of the Apollo cache instead of the Redux store. A major benefit of GraphQL is reducing over-fetching.
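To make the aliasing and Hooks points concrete, here is a minimal TypeScript sketch; the schema, field names, and component are hypothetical, not Airbnb's actual code:

```typescript
import gql from 'graphql-tag';
import { useQuery } from '@apollo/react-hooks';

// Aliasing: the client reads camel-case names while the underlying
// REST-backed schema still exposes snake-case fields.
const GET_LISTING = gql`
  query GetListing($id: ID!) {
    listing(id: $id) {
      roomCount: room_count
      nightlyPrice: nightly_price
    }
  }
`;

// With React Hooks, the component reads from the Apollo cache directly,
// with no Redux store or actions in between.
function ListingSummary({ id }: { id: string }) {
  const { data, loading, error } = useQuery(GET_LISTING, { variables: { id } });
  if (loading) return 'Loading…';
  if (error) return 'Could not load listing.';
  return `${data.listing.roomCount} rooms at $${data.listing.nightlyPrice}/night`;
}
```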


ASP.NET Core Microservices: Getting Started

Let's consider that we're exploring microservices architecture, and we want to take advantage of polyglot persistence to use a NoSQL database (Couchbase) for a particular use case. For our project, we're going to look at a Database per service pattern, and use Docker (docker-compose) to manage the database for the ASP.NET Core Microservices proof of concept. This blog post will be using Couchbase Server, but you can apply the basics here to the other databases in your microservices architecture as well. I'm using ASP.NET Core because it's a cross-platform, open-source framework. Additionally, Visual Studio (while not required) will give us a few helpful tools for working with Docker and docker-compose. But again, you can apply the basics here to any web framework or programming language of your choice. I'll be using Visual Studio for this blog post, but you can achieve the same effect (with perhaps a little more work) in Visual Studio Code or plain old command line.
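As a starting point, a docker-compose file for this setup might look something like the sketch below; the image tag, ports, and volume name are illustrative, so check them against the Couchbase documentation before relying on them:

```yaml
# Hypothetical docker-compose.yml for a database-per-service setup:
# each microservice gets its own Couchbase container.
version: '3.4'
services:
  catalog-db:                       # database for one service only
    image: couchbase:community-6.0.0
    ports:
      - "8091-8094:8091-8094"       # web console and service ports
      - "11210:11210"               # key-value (data service) port
    volumes:
      - catalog-data:/opt/couchbase/var
volumes:
  catalog-data:
```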


Amazon Just Joined The Race To Dominate Quantum Computing In The Cloud

AWS is something of a latecomer to the quantum cloud. IBM kicked off the trend several years ago, and since then a wave of other companies have unveiled cloud-based offerings, including Amazon’s partners D-Wave and Rigetti. Nor is AWS the first cloud provider to offer access to a range of other companies’ quantum hardware: Microsoft took that honor when it launched its Azure Quantum cloud offering last month. Yet AWS is likely to become a force to be reckoned with in the field because of a unique advantage it has over its rivals. ... AWS became a cloud powerhouse because many of the services it now offers were initially developed for Amazon’s vast commercial empire. The same scenario could well play out with quantum computing. For instance, one of the things quantum machines are particularly good at is optimizing delivery routes. AWS could—quite literally—road test a quantum-powered service that lets Amazon plot the most efficient directions for its delivery vehicles to take as they drop off parcels. The machines could also help Amazon optimize the way goods flow through its vast warehouse network.


Simplifying data management in the cloud

Attempting to leverage the approaches and tools we use today will add complexity until the systems eventually collapse from the weight of it. Just think of the number of tools in your data center today that cause you to ask “what were they thinking?” Indeed, they were thinking much the same way we’re thinking today, including looking for tactical solutions that will eventually not provide the value they once did, and in some cases will provide negative value. I’ve come a long way to make a pitch to you, but as I think about how we solve this issue, one approach keeps coming up as the likeliest solution. Indeed, it’s been kicked around in different academic circles. It’s the notion of self-identifying data. I’ll likely hit this topic again at some point, but here’s the idea: Take the autonomous data concept a few steps further by embedding more intelligence and more knowledge about the data with the data itself. We would gain the ability to have all knowledge about the use of the data available from the data itself, no matter where it’s stored or where the information is requested.
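As a purely illustrative sketch (not anything proposed in the piece itself), self-identifying data might be modeled along these lines in TypeScript, with every field name invented for the example:

```typescript
// The payload travels with machine-readable knowledge about itself, so any
// system receiving it can learn its meaning, provenance, and usage rules
// without consulting an external catalog.
interface SelfIdentifyingData<T> {
  payload: T;
  schema: string;               // what the data is, e.g. a schema URI
  provenance: {
    source: string;             // where the data originated
    createdAt: string;          // ISO-8601 creation timestamp
  };
  usagePolicy: {
    allowedPurposes: string[];  // how the data may be used
    retentionDays: number;      // how long it may be kept
  };
}

const reading: SelfIdentifyingData<{ celsius: number }> = {
  payload: { celsius: 21.5 },
  schema: 'https://example.com/schemas/temperature-reading',
  provenance: { source: 'sensor-42', createdAt: '2019-12-04T08:00:00Z' },
  usagePolicy: { allowedPurposes: ['analytics'], retentionDays: 90 },
};
```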


Survey: IT pros see career potential in as-a-Service trend

IT pros over 55 are most concerned with data complexity slowing down future data migrations. One question in the survey suggests that instead of tearing down data silos, cloud migration projects may create new ones. Seventy-seven percent of respondents say that data is siloed between public and private clouds. Miller said that to avoid this, organizations need to choose the aaS model that makes the most business and policy sense. "Companies need to adopt a model that is not tied to one cloud or one premise but has the flexibility to move data and applications to where business needs are best met," he said. "If you adopt the right aaS model, you're breaking down the silos and driving overall efficiencies." While the majority of companies state that they have implemented at least some aaS projects, 66% of respondents say that IT pros avoid this new way of working out of fear of losing their jobs. The younger respondents (ages 22 to 34) were most likely to think this, at 70%, compared to 67% of 35- to 54-year-olds and only 45% of those 55 and older.



Quote for the day:


"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell

