With the increased availability of data through sensors, interconnected mobile devices, social media, and private or public spatial data sets, the demand for seamless integration of spatial information into data-driven decision-making has reached a new high. We consider spatial data to be any kind of data supplemented with additional information about the location and shape of objects on earth. A simple example is the general information about companies, buildings, people, or vehicles, such as name, type, and color, supplemented with X/Y coordinate values defining their current or permanent position on earth. Real-world objects, however, often have much more complex shapes.
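As a rough illustration of that idea, a plain record supplemented with coordinates might be modeled as follows. The record structure, field names, and the simple planar distance function are all hypothetical choices for this sketch; real spatial data typically involves richer geometries (lines, polygons) and a defined coordinate reference system:

```python
from dataclasses import dataclass
import math

@dataclass
class SpatialRecord:
    """General object information plus a location on earth."""
    name: str
    kind: str   # e.g. "building", "vehicle"
    color: str
    x: float    # e.g. longitude or easting
    y: float    # e.g. latitude or northing

def planar_distance(a: SpatialRecord, b: SpatialRecord) -> float:
    """Straight-line distance in the X/Y plane (ignores earth curvature)."""
    return math.hypot(a.x - b.x, a.y - b.y)

depot = SpatialRecord("Depot A", "building", "grey", 0.0, 0.0)
truck = SpatialRecord("Truck 7", "vehicle", "red", 3.0, 4.0)
print(planar_distance(depot, truck))  # 5.0
```

Even this toy model shows why spatial data is more than "data plus two numbers": as soon as objects have complex shapes, the point-based record above stops being adequate.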
The two main considerations regarding storage are how to store and where to store. How to store your data depends on your overall use case: the type of data you produce will determine the type of database you require. If you have structured data, then a relational database such as SQL Server or MySQL is your best bet. On the other hand, if you have unstructured data such as images, videos, or tweets, then you probably need a schema-less store such as MongoDB or a platform like Hadoop. Or maybe, like some systems I’ve worked on, you need both. I’m not suggesting that Product Managers dictate the type of DB or the architecture of the data tier; that’s the role of your Architecture and IT team. However, it IS our job as Product Managers to define clear use cases and convey them to the technical team, so they can implement the right infrastructure for the product.
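The structured-versus-unstructured contrast can be made concrete with a small sketch. The table, column names, and document fields below are hypothetical, but they show the key difference: a relational table enforces one fixed schema for every row, while a document store lets each record carry different fields:

```python
import json
import sqlite3

# Structured data: a fixed schema, and every row has the same columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
             ("Ada", "ada@example.com"))
row = conn.execute("SELECT name, email FROM users").fetchone()

# Semi-structured/unstructured data: each document may have its own shape.
tweet = {"user": "ada", "text": "hello", "media": ["img1.png"], "geo": None}
post = {"user": "bob", "text": "hi"}  # no media or geo fields at all
docs = [json.dumps(d) for d in (tweet, post)]

print(row)        # ('Ada', 'ada@example.com')
print(len(docs))  # 2
```

If the data’s shape is known up front and stable, the relational route pays off in integrity and query power; if records vary widely (as tweets with optional media do), forcing them into one rigid schema is what drives teams toward schema-less stores.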
Blockchain's ability to enable peer-to-peer transactions, financial or otherwise, that are simultaneously secure, indelible, and almost instant has the potential to disrupt massive segments of society that are built around these kinds of interactions, including healthcare, financial services, real estate, and almost anything that requires a transaction between two or more parties. And as in the early days of the Internet, it is hard to imagine the full impact, the myriad use cases, and the entirely new business models that will eventually emerge as a result of blockchain's adoption. But that is mostly a reflection of our lack of imagination, perspective, and foresight, not of the potential impact of the technology. Recording artist Imogen Heap recently penned a Harvard Business Review article that did a fantastic job of taking blockchain from an abstract concept to a real-life potential use case.
Your IT infrastructure is essential to your enterprise’s success. However, it is often not given enough consideration as a key foundation component requiring true planning; it is typically addressed as an afterthought, something to ‘bolt on’ later down the road. It is important to put in quality time planning for an infrastructure that supports automation, scalability, availability (high availability where necessary), redundancy, and security. All of these help support continuous integration with geographically dispersed resources, microservices, virtualized servers, storage, networks, and containerization. Also, let’s not forget the elephant in the room: cloud enablement, which can be an integral part of your current and long-term strategy.
Understanding of Bitcoin’s mechanics remains in short supply. Even the most enlightened laypeople treat Bitcoin and its relations like the “here be monsters” zones on antique maps. Weird shit is happening out there! But the intricacies of the system elude even insiders. I’d wager that only a fraction of the people who currently own a collective $50 billion-plus worth of the digital currency could intelligibly explain what happened last week, when Bitcoin spun off a kind of mutated clone of itself called Bitcoin Cash. Warren Buffett famously advised us never to invest in anything that we don’t understand. Bitcoin investors are paying Buffett no mind. Of course, it’s not as if the workings of regular currency (what cryptocurrency devotees refer to as “fiat money”) are universally comprehended, either.
Law enforcement agencies would then need to determine how to integrate daily data acquisition and analytics to sharpen risk management, including in and around locations during large-scale crisis situations, so that their internal systems could support decision making. Adding that there was no cookie-cutter approach, Lopez said systems and methodologies would need to extract data and be able to distinguish innocuous events from real and serious threats. "We must acquire the ability to distill the noise and sharpen our focus," he said. This further emphasised the importance of partnership between the private sector and law enforcement, which would ensure the necessary capabilities were developed "to fight the new order of threats".
Some 62% of security experts believe that artificial intelligence (AI) will be weaponized and used for cyberattacks within the next 12 months, a Cylance survey released Tuesday found. This makes the growth of AI a double-edged sword, according to Cylance’s blog post on the findings. “While AI may be the best hope for slowing the tide of cyberattacks and breaches, it may also create more advanced attacker tactics in the short-term,” the post said. While the majority of those surveyed said they felt there was a high possibility that AI would be used offensively, 32% said there was no possibility of that happening, and 6% said they didn’t know. It was noted, however, that the potential use of AI as an offensive weapon wouldn’t slow its use as a defensive tool.
Scaling agile methods for large projects is a difficult task. Teams are large, efforts difficult, and deadlines tight. Multiple releases often have to be worked in parallel across geographical distances in distributed environments. The outputs of these releases must be planned, coordinated, and synchronized in such a manner that development, integration, and test flow naturally. Based on the recent data shown in Figure 2, the organizations pursuing such large developments are using either an agile-at-scale (48%) or a hybrid (52%) methodology. This represents a dramatic change from two years ago, when agile-at-scale usage was about half of what it is today. Such growth has been propelled primarily by the fan-out of agile methods enterprise-wide.
Buying COTS (commercial off-the-shelf) products and hosting and connecting them to hundreds of other systems has been the primary focus of banks' IT departments. At times, these projects run over several years and come at a very steep cost. Things are changing, though: Workday for HR, Salesforce for CRM, and the list goes on and on. Within the banking core systems space, new cloud-based systems are set to significantly alter the servicing and origination value chains. The implementation timeframe for these cloud-based systems is less than two-thirds that of a comparable on-prem system, and the upkeep isn’t too shabby either. While some banks continue to operate as traditional banks, others are evolving into a Bank + Technology shop. They have expanded their portfolio to provide B2B services to other banks and to consumers.
It’s one of those management innovations that causes a fair amount of pushback from most people, partly because they have such potential to go wrong. As a person who has been on the receiving end of a manager obsessed with metrics and nothing else, I can understand those who resist the whole idea of KPIs. But they can be useful, as long as they’re used properly. Put simply, KPIs are good servants but bad masters (to steal a quote about money). They can give useful information about how a program is going, and the regular assessment of metrics can act as a reminder to assess how well a program is working in a wider sense. So they aren’t necessarily toxic; as with most things, the devil is in the details. Here are some ways that I’ve seen KPIs become toxic in EA departments.
Quote for the day:
"By failing to prepare, you are preparing to fail." -- Benjamin Franklin