Pretexting is, by and large, illegal in the United States. For financial institutions covered by the Gramm-Leach-Bliley Act of 1999 (GLBA) — which is to say just about all financial institutions — it's illegal for any person to obtain or attempt to obtain, or to cause or attempt to cause the disclosure of, customer information of a financial institution by false pretenses or deception. GLBA-regulated institutions are also required to put standards in place to educate their own staff to recognize pretexting attempts. One thing the HP scandal revealed, however, was that it wasn't clear whether it was illegal to use pretexting to gain non-financial information — remember, HP was going after its directors' phone records, not their money. Prosecutors had to pick and choose among laws to file charges under, some of which weren't written with this kind of scenario in mind. In the wake of the scandal, Congress quickly passed the Telephone Records and Privacy Protection Act of 2006, which extended protection to records held by telecom companies. One of the best ways to prevent pretexting is simply to be aware that it's a possibility, and that techniques like email or phone spoofing can make it unclear who's reaching out to contact you.
The idea is to use distributed systems to give individuals and cities control over their own data. Right now, health insurance companies and hospitals have primary control of an individual's health data, and banks get the most benefit from analyzing customer data. Individuals have access to the information, but there's no easy way to put it to good use. If smaller, local organizations--like credit unions--could create a secure platform for people to manage their own data, this would shift decision-making and control to people and communities instead of national corporations. Increasing local control of data would allow leaders and people to figure out solutions that fit the needs of their communities, instead of using a one-size-fits-all approach. Pentland used the example of the Upper Peninsula of Michigan and Boston. He grew up in a rural community but now lives in an international, urban, tech-centric city. "The rules here are totally different, and what works for the Upper Peninsula does not work here," he said. "The idea is to handle local conditions locally and coordinate globally so cities can learn from each other but be responsible for themselves."
Branching controls code deployment and can regulate whether a feature gets deployed. But this is only a gross, binary control that can turn the feature's availability on and off. Using only branching to control feature deployments limits a team's ability to separate when code gets deployed from when product leaders enable it for end-users. There are times product owners and development teams should deploy features and then control access to them at runtime. For example, it's useful to experiment and test features with specific customer segments or with a fraction of the user base. Feature flagging is a capability and set of tools that enable developers to wrap features with control flags. Once developers deploy the feature's code, the flags enable them to toggle, test, and gradually roll out the feature, with tools to control whether and how it appears to end-users. Feature flagging enables progressive delivery by turning on a feature slowly and in a controlled way. It also drives experimentation: features can be tested with end-users to validate impact and experience. Jon Noronha, VP Product at Optimizely, says, “Development teams must move fast without breaking things.”
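To make the idea concrete, here is a minimal sketch of a percentage-based feature flag in Python. The in-memory flag store, flag name and checkout functions are invented for illustration; a real team would use a flag service such as the vendors mentioned above rather than hand-rolling this.

```python
import hashlib

# Hypothetical in-memory flag store. "new_checkout" is deployed but
# only enabled for 25% of users at runtime.
FLAGS = {
    "new_checkout": {"enabled": True, "rollout_percent": 25},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Return True if the flag is on for this user.

    Hashing the user id gives each user a stable bucket in 0-99, so the
    same user keeps seeing the same variant as the rollout percentage
    is gradually raised.
    """
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["rollout_percent"]

def checkout(user_id: str) -> str:
    # The feature is wrapped in a flag check: the code is deployed,
    # but exposure is decided at runtime, not at branch-merge time.
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"
    return "legacy checkout flow"
```

Because the bucket is derived from a hash rather than a coin flip, raising `rollout_percent` from 25 to 50 only adds new users to the treatment group; nobody who already saw the new flow is flipped back.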
“Reinforcement learning entails an agent, action and reward,” said Ankur Taly, who is the head of data science at Fiddler. “The agent, such as a robot or character, interacts with its surrounding environment and observes a specific activity, responding accordingly to produce a beneficial or desired result. Reinforcement learning adheres to a specific methodology and determines the best means to obtain the best result. It’s very similar to the structure of how we play a video game, in which the agent engages in a series of trials to obtain the highest score or reward. Over several iterations, it learns to maximize its cumulative reward.” In fact, some of the most interesting use cases for reinforcement learning have been with complex games. Consider the case of DeepMind’s AlphaGo. The system used reinforcement learning to quickly understand how to play Go and was able to beat the world champion, Lee Sedol, in 2016. (The game has more potential moves than there are atoms in the universe!) But there have certainly been other applications of the technology that go beyond gaming. To this end, reinforcement learning has been particularly useful with robotics.
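The trial-and-reward loop Taly describes can be sketched in a few lines with tabular Q-learning, one of the simplest reinforcement learning algorithms. The environment here is invented for illustration: an agent on a five-cell corridor that earns a reward only by reaching the rightmost cell, with hyperparameters chosen arbitrarily.

```python
import random

# Toy environment: positions 0..4 on a corridor; reward at position 4.
N_STATES = 5
ACTIONS = [-1, +1]           # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: expected cumulative reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: occasionally explore, otherwise exploit.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Standard Q-learning update toward reward + discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After many trials the greedy policy prefers stepping right everywhere.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

Nothing in the environment tells the agent that "right" is good; the preference emerges purely from repeated trials and the propagation of the reward signal back through the Q-table, which is the same principle, at vastly larger scale, behind systems like AlphaGo.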
The idea behind developing a data strategy is to make sure all data resources are positioned in such a way that they can be used, shared, and moved easily and efficiently. Data is no longer a byproduct of business processing – it’s a critical asset that enables processing and decision making. A data strategy helps by ensuring that data is managed and used as an asset. It provides a common set of goals and objectives across projects to ensure data is used both effectively and efficiently. A data strategy establishes common methods, practices, and processes to manage, manipulate, and share data across the enterprise in a repeatable manner. While most companies have multiple data management initiatives underway (metadata, master data management, data governance, data migration, modernization, data integration, data quality, etc.), most efforts are focused on point solutions that address specific project or organizational needs. A data strategy establishes a road map for aligning these activities across each data management discipline in such a way that they complement and build on one another to deliver greater benefits.
Most commercial software engineering tasks out there do not start with a clean slate. There is an existing application, written in a certain language (or languages), relying on a set of frameworks and libraries, and running on top of some operating system(s). We take it upon ourselves (or our teams) to change that existing application so that it meets some requirement, such as developing a new feature or fixing an existing bug. Simultaneously, we are required to continue meeting all the existing (un)documented requirements and to maintain the existing behavior as much as possible. And, as every junior software engineer finds out on their first day on the job, writing a piece of code to solve a simple computer science problem (or copying the answer from StackOverflow) is nowhere near the level of complexity of solving that same problem within a large and intricate system. Borrowing from the financial industry, let’s define Understandability: “Understandability is the concept that a system should be presented so that an engineer can easily comprehend it.” The more understandable a system is, the easier it will be for engineers to change it in a predictable and safe manner.
For Dickey and other WLAN professionals, the pandemic has demonstrated the critical importance of wireless communications. Nearly two-thirds of American workers – double the number from early March – are doing their jobs via home wireless, according to a Gallup poll. Cisco, in its latest earnings report, announced that 95% of its employees are working from home. That means WLAN pros have had to shift their attention from maintaining corporate networks to remotely helping workers, many of whom are non-technical, get their home networks up to speed and securely connected to corporate assets. Tam Dell'Oro, founder and CEO of the Dell'Oro Group, surveyed about 20 enterprise network managers and WLAN distributors, and reports that new in-building deployments have pretty much stopped cold. She adds that with WLAN pros charged with setting up and securing at-home workers, "remote access devices, particularly those with higher WAN connectivity and higher security, are flying off the shelf." IDC analyst Brandon Butler says the 2020 forecast for the WLAN industry has been downgraded from the 5.1% growth rate predicted prior to the pandemic to a 2.3% decline.
Good API documentation does not happen by accident. It takes clear guidelines, a consistent team effort, stringent peer review and a commitment to maintain documentation throughout an API's lifecycle. Some top API documentation best practices you should implement include: Include all necessary components. A complete documentation package usually has sections on authentication, error messages, resource usage, acceptable use policies and a comprehensive change log. Some API documentation also includes a series of guides that provide detailed examples of API use and use cases. Know the intended audience. Tailor API documentation for the intended audience. If the documentation is intended for novice developers, focus on things like tutorials, samples and guides. If the documentation is intended for experienced developers, build up reference material detailing syntax, parameters, arguments and response details. Consider how to include vested non-developers, such as project managers or even CTOs, in the API documentation.
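As a small illustration of the components listed above, here is a reference entry for a single endpoint written as a Python docstring. The endpoint path, error codes and stub implementation are all invented for this sketch; the point is the shape of the entry, which covers authentication, parameters, errors and a usage sample in one place.

```python
def get_user(user_id: int) -> dict:
    """Fetch a user record.

    GET /v1/users/{user_id}

    Authentication:
        Requires a bearer token in the Authorization header.

    Parameters:
        user_id (int): numeric id of the user to fetch.

    Errors:
        401 Unauthorized      -- missing or expired token.
        404 Not Found         -- no user with that id.
        429 Too Many Requests -- rate limit exceeded; retry later.

    Example:
        >>> get_user(42)  # doctest: +SKIP
        {'id': 42, 'name': 'Ada'}
    """
    # Stub implementation so the sketch runs stand-alone; a real
    # client would issue the HTTP request described above.
    return {"id": user_id, "name": "Ada"}
```

Novice-oriented docs would expand the Example section into a tutorial; reference docs for experienced developers would expand Parameters and Errors into exhaustive tables, but both audiences benefit from the same consistent skeleton.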
By ‘Information Technology’ we normally mean our modern digital equipment. However, for millennia humanity has used information technologies to record and transmit information. To underline the significance of information technology, the difference between prehistory and history lies in the use of information technology — the ‘history era’ is synonymous with the ‘information age’. Floridi argues that with the invention of the computer we have recently entered the era of hyperhistory. The difference between hyperhistory and history is that in history, ITs only record and transmit information, while in hyperhistory, computers have the capability to process it. As a basic function, computers are able to store information, and this already marks a big difference from the labor-intensive recording of data that persisted until the sixties or seventies. Moreover, the computer can process this stored information and make computations that beforehand were the prerogative of humans. As a side remark, until the nineteenth century the term ‘computer’ was synonymous with a person who ‘performs calculations’, not a machine.
Quote for the day: