Daily Tech Digest - February 03, 2019

Serverless computing’s dark side: less portability for your apps

How a serverless development platform calls into your serverless code can vary, and there is no uniformity between public clouds. Most developers who build applications on cloud-based serverless systems couple their code tightly to a public cloud provider’s native APIs. That can make it hard, or even unviable, to move the code to another platform. The long and short of it is that if you build an application on a cloud-native serverless system, it’s difficult to move either to another cloud provider or back on-premises. I don’t mean to pick on serverless systems; they are very handy. However, more and more I’m seeing that enterprises that demand portability when picking cloud providers and application development and deployment platforms often opt for what’s fastest, cheapest, and easiest. Portability be damned. Of course, containers are also growing by leaps and bounds, and one of the advantages of containers is portability. However, they take extra work, and they need to be built with a container architecture in mind to be effective.


Grady Booch on the Future of AI

To put things in perspective, there have been many springs and winters in the development of artificial intelligence. The first winter was in the 1950s, during the height of the Cold War, when there was a great deal of interest in machine translation in order to translate Russian into other languages. According to an often-quoted story, researchers put in statements such as "The spirit is willing, but the flesh is weak." Translated into Russian and back, the result was "The vodka is strong, but the meat is rotten." Language learning was a lot harder than people first thought. The next spring arose with Newell and Simon's Logic Theorist and with Terry Winograd's work on manipulating blocks in a small world, which led to some progress. Of course, that was the time when Marvin Minsky stated that there would be human-level intelligence within three years. No one makes those kinds of claims any more. Computational power and expressiveness were the limits of this approach.


Blockchain and biometrics: The patient ID of the future?

This isn't the first time blockchain has been paired with biometrics for identification purposes. Back in 2017, Microsoft and Accenture joined forces to create a blockchain solution that used biometric data to act as digital identification for refugees. Pharmaceutical companies have also considered utilizing blockchain to improve track-and-trace serialization. IrisGuard's technology has previously been used by United Nations agencies to prevent human trafficking, providing refugees with iris-based registration and e-payment solutions through the High Commissioner for Refugees (UNHCR) and the World Food Programme (WFP), the release said. "Patient identification is a growing problem in today's healthcare system," Chrissa McFarlane, CEO and founder of Patientory, Inc., said in the release. "This technology can help providers identify an individual with unparalleled accuracy, through iris-recognition and data matching. And because it's verified on the blockchain, it's scalable without sacrificing data security—which is one of the main problems with our current healthcare-data infrastructure."


State Machine Design in C

A common design technique in the repertoire of most programmers is the venerable finite state machine (FSM). Designers use this programming construct to break complex problems into manageable states and state transitions. There are innumerable ways to implement a state machine; a switch statement provides one of the easiest-to-implement and most common versions. It has drawbacks, however. Any transition is allowed at any time, which is not particularly desirable: for most designs, only a few transition patterns are valid, and ideally the software design should enforce these predefined state sequences and prevent the unwanted transitions. Another problem arises when trying to send data to a specific state. Since the entire state machine is located within a single function, sending additional data to any given state proves difficult. And lastly, these designs are rarely suitable for use in a multithreaded system; the designer must ensure the state machine is called from a single thread of control.


Privacy: Several States Consider New Laws

"Each of the 50 states now has its own breach notification laws, with nearly one-half adopting data security and/or data disposal requirements to protect consumers' personally identifiable information from unauthorized disclosure," says privacy attorney David Holtzman, vice president of compliance at security consultancy CynergisTek. "While most states are not taking a sectorial approach to the type of PII that must be protected, New York, Ohio and South Carolina have adopted cybersecurity requirements that target industries that include health plans and insurers," he adds. "A theme seen in state legislation to update breach notification laws in recent years is to set shorter notification periods. Some argue that this would give consumers more time to take action to protect themselves against the threat of financial fraud or identity theft by notifying major credit reporting agencies." Privacy attorney Kirk Nahra of the law firm Wiley Rein notes: "The states continue to examine the possibilities for increasing privacy and data security protections, both in currently regulated areas and in situations where federal law is not directly applicable through a specific law or regulation."


The 3 Secret Types of Technical Debt

Unfortunately, the cost of repaying debt is much higher by that point, because of the compound interest that has accrued on the debt. In other words, 2 hours invested in repaying technical debt 6 months ago could be equivalent to a full day of work today to repay the same amount of debt. The problem with this type of approach is that it feels like you are going fast to start with, because you are delivering features and the technical debt is not hurting you much at the very beginning. But you are putting yourself on the compound interest curve instead of staying linear, and linear and compound curves look similar at the start but very different later on. In most cases, you want to avoid ending up in this category. An example of where this type of debt is acceptable is when you need to hit a regulatory deadline, where the cost of missing the deadline outweighs the cost of repaying the compound debt accumulated later on.


Decision Trees — An Intuitive Introduction

Regression works similarly to classification in decision trees: we choose values to partition our data set, but instead of assigning a class to a particular region or partitioned area, we return the average of all the data points in that region. The average value minimizes the prediction error in a decision tree. An example will make this clearer. Predicting rainfall for a particular season is a regression problem, since rainfall is a continuous quantity. Given rainfall stats like those in the figure below, how can a decision tree predict the rainfall value for a specific season? ... But, being a supervised learning algorithm, how does it learn to do so? In other words, how do we build a decision tree? Who tells the tree to pick a particular attribute first, then another attribute, and then yet another? How does the decision tree know when to stop branching further? Just as we train a neural network before using it for prediction, we have to train (build) a decision tree before prediction.


Before AI is a human right, shouldn't we make it work first?

Benioff warned that AI-powered countries and companies will be "smarter," "healthier," and "richer," while those less generously endowed with AI will be "weaker and poorer, less educated and sicker." I guess he hasn't seen the AI that currently powers the Western world—you know, like IBM's Watson, which one of its engineers characterized as "like having great shoes but not knowing how to walk." Not that IBM is alone—take a walk through the transcripts of public companies reporting earnings, and you'll see mentions of artificial intelligence on a precipitous rise. Look around the real world, however, and finding true artificial intelligence is an exercise in futility. Even companies packed with PhDs, like Google, seem able to muster only advertising that feels like weak pattern matching. It's one thing to insist that companies like, say, Google give free access to their algorithms, but quite another to figure out how to do that in practice.


Overcoming RESTlessness

Broad as it was, the idea of using the Web for network-based sharing of data and services beyond the browser was a popular one. Software developers quickly seized on Fielding's work and put it into practice. The rise of REST was itself fuelled by a false dichotomy, with SOAP playing the role of bogeyman. Whereas SOAP attempted to provide a method of tunneling through the protocols of the web, the REST approach embraced them. This notion of REST being "of the web, not just on the web" made it a more intuitive choice for software engineers already building web-based solutions. As the SOAP and WS-* ecosystem became more complicated, the relative simplicity and usability of REST won out. Over time, JSON replaced XML as the de facto data format for web APIs for similar reasons. As the usage of the web computing paradigm expanded to new scenarios -- enterprise application integration, cloud provisioning, data warehouse querying, IoT -- so did the adoption of REST APIs.


Scrum Guide Decomposition, Part 2

In the enterprise, it would be difficult (but not impossible) to have a team with all the competencies to do all the work, simply because teams are siloed into specific competencies: DBAs, middleware, specific back-end systems like SAP, and so forth. The enterprise's unwillingness to break apart these silos may keep it from fully realizing the benefits of Scrum. By having team members who are cross-functional, but not necessarily proficient in all competencies, you can avoid delays when someone, for example, is sick or on leave; someone else can continue the work. The team can also share the workload, so no single person carries the team because they are the only one who knows a competency. The term “jack of all trades, master of none” comes to mind. Good luck finding people who know everything. It is the team as a whole that becomes the master, not individuals. The Scrum Team has proven itself to be increasingly effective for all the previously stated uses, and for any complex work.



Quote for the day:


"Don't be afraid to stand for what you believe in, even if that means standing alone." -- Unknown

