
How to Master Geospatial Analysis with Python

For a long time, there was no reference book covering the most-used geospatial Python libraries, nor one marking the transition from Python 2 to 3 – an important milestone, as Python 3 fixed many issues under the hood and represents a major language update. A book covering both raster and vector data analysis in Jupyter Notebooks was also much needed, as notebooks have become the new standard for writing code and visualizing spatial data in an interactive web environment, rather than in a code editor.

A new book, “Mastering Geospatial Analysis with Python” (Packt Publishing), tries to fill this gap. Whereas other geospatial Python books usually cover only a small sample of libraries, or even a single type of application, this book takes a more holistic approach, covering a wide range of tools for interacting with geospatial data. It does so through short software tutorials that show how to use a dataset to solve real-world, everyday data management, analysis and visualization problems.

A geospatial analyst’s toolkit

The book starts with an introduction to the most powerful Python libraries. One example is GDAL, whose read and write capabilities are used throughout the industry on a daily basis, whether as part of desktop software or as a standalone solution. Also included are newer, more Pythonic libraries built on top of GDAL, such as Rasterio, GeoPandas and Fiona, as well as brand-new geospatial libraries – Esri’s ArcGIS API for Python, CARTO’s CARTOframes and Mapbox’s MapboxGL-Jupyter – that haven’t been covered anywhere else yet. These libraries show how geospatial companies are releasing APIs for interacting with cloud-based infrastructure to store, visualize, analyze and edit geospatial data.
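To give a flavor of how these Pythonic libraries feel in practice, here is a minimal sketch of inspecting a vector dataset with GeoPandas. The file name "districts.shp" is a hypothetical placeholder, not a dataset from the book:

```python
# Minimal sketch: inspecting a vector dataset with GeoPandas.
# "districts.shp" is a hypothetical file name; substitute any local shapefile.
import geopandas as gpd

districts = gpd.read_file("districts.shp")  # Fiona/GDAL handle the I/O under the hood
print(districts.crs)                  # coordinate reference system of the layer
print(districts.head())               # attribute table as a pandas DataFrame
print(districts.geometry.area.sum())  # total area, in the units of the layer's CRS
```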

It’s no surprise that geospatial companies release these types of APIs. Taking Google’s and Amazon’s platforms as examples, spatial data only makes sense if you have the right platform and tools to manage it. This book lets you try out different geospatial platforms and APIs, so you can compare their capabilities. Raster and vector data analysis is another important topic: it is still performed on a daily basis and therefore belongs in every geospatial analyst’s toolkit. Beyond spatial analysis, the book teaches you how to create a geospatial REST API, process data in the cloud and build a web mapping application.
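As an illustration of the raster side of that toolkit, here is a minimal sketch of computing summary statistics for a single-band raster with Rasterio. The file name "elevation.tif" is a placeholder, not an example taken from the book:

```python
# Minimal sketch: basic raster statistics with Rasterio.
# "elevation.tif" is a hypothetical file name; any single-band GeoTIFF works.
import rasterio

with rasterio.open("elevation.tif") as src:
    band = src.read(1)          # first band as a NumPy array
    print(src.crs, src.bounds)  # georeferencing metadata
    # mask out the nodata value, if one is defined, before computing statistics
    valid = band[band != src.nodata] if src.nodata is not None else band
    print(valid.min(), valid.mean(), valid.max())
```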

If you interact with spatial databases on a daily basis, Python has you covered. With some scripting experience under your belt, you’ll learn how to manage databases such as PostGIS, SQL Server and SpatiaLite. It’s no surprise that PostGIS is gaining ground and appearing more often in job ads for geospatial analysts.
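For instance, a PostGIS query can be run directly from Python with a standard database driver such as psycopg2. The sketch below assumes a hypothetical "roads" table with a "geom" column; the connection parameters are placeholders as well:

```python
# Minimal sketch: querying a PostGIS database from Python with psycopg2.
# Connection parameters and the "roads" table are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(dbname="gisdb", user="gis", password="secret", host="localhost")
with conn.cursor() as cur:
    # ST_Length is a PostGIS function; casting to geography returns meters
    cur.execute("""
        SELECT name, ST_Length(geom::geography) AS length_m
        FROM roads
        ORDER BY length_m DESC
        LIMIT 5;
    """)
    for name, length_m in cur.fetchall():
        print(f"{name}: {length_m:.0f} m")
conn.close()
```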

The future

Finally, a few words about the future of geospatial analysis. The adoption of AI, machine learning and blockchain technology is already transforming geospatial technology. Expect even more data types, formats, standards and converging technologies in which geospatial as we know it will have its place. Only recently has geospatial technology begun gaining interest from other domains. This book hopes to bridge those domains, showing that Python is an excellent way to get into the geospatial field and discover its many great tools.


After launching its first fully owned maps this week, Apple is now officially ‘a mapping company’

When Apple Maps launched back in 2012, it was far from perfect. The quality of the mapping data was terrible, the app lacked some key features, and Australian police even warned that using Apple Maps could cause life-threatening situations. Over the past six years a lot has changed, but most importantly, Apple realized that map making is much more complex than it had initially imagined.

Apple Maps in 2012 – a park in the center of Warsaw, Poland. The Polish language has nothing to do with Chinese.

The company started thinking about launching its own mapping solution as early as 2009, when it quietly acquired the map API provider Placebase. In 2010, Apple bought Poly9, a Canadian start-up that specialized in connecting mapping data with other sources to create Google Earth-like visualizations. Finally, in 2011 it acquired C3 Technologies, a former military spin-off of Sweden’s Saab that automatically builds 3D city models from aerial imagery captured from airplanes.

Why couldn’t all these technologies and resources save Apple Maps?

Everyone who has ever been involved in a large-scale commercial map-making process understands how complex it is to maintain up-to-date, topologically correct geographic data with complete coverage and high reliability. It is even more challenging when you try to achieve that by combining many different data sources: hundreds of attributes, object libraries, ontologies, languages, various topology rules, map projections, reference systems, inconsistent data ages and quality measures… a database nightmare.

Most of the navigation layers in Apple Maps are based on TomTom’s data, and on OpenStreetMap in countries where TomTom doesn’t have decent coverage. Additionally, the company uses multiple local data providers, such as Waze for Israel and AND for some areas of Eastern Europe, plus plenty of POI data providers such as Foursquare (a full list of the data sources Apple uses is available on its official acknowledgments page).

Combining OSM and TomTom data and mixing in various other suppliers was a Molotov cocktail that could not end well – it even got Richard Williamson, the Apple senior director responsible for delivering maps, fired just a few weeks after the launch of the product.

Technically speaking, at that stage Apple was still not a mapping company; it was a data integrator and software provider. At that moment, everyone in the industry thought the only way forward was a big acquisition of one of the global independent mapping data providers: TomTom or HERE. Investment from Apple would have allowed them to fill coverage gaps in less developed countries and improve their technology stack. Such a move could have given Apple high-quality global maps three to four years ago. But Apple decided to go its own way, and it had a couple of good reasons to do so.

Buying TomTom or HERE was not an option.

First of all, Apple doesn’t like to take over big organizations. It prefers talent acquisitions of small engineering teams with innovative ideas and visions (here is the full list of geospatial startups Apple acquired). The company’s culture is all about bringing together the brightest minds, and by these means keeping full control over who it takes on board.

But there is also a much more serious reason. Navteq – the mapping company acquired by Nokia in 2008 and rebranded as HERE – was founded in 1985. The origins of Tele Atlas – the company behind TomTom’s maps – go back to 1984. These two companies have over 30 years of experience in the industry, but many of their processes have remained nearly unchanged for the past 10–15 years, which is a huge burden.

Apple likes to have things polished not only on the outside but also process-wise. When you buy a small startup, it is easy to clean up and reuse the code it has created. When you acquire a 30-year-old tech company that employs several thousand people, it is much more challenging.

Making maps on its own

Apple spent all of 2013 cleaning up the maps and thinking about how to take the project forward. The only option left on the table was to start from scratch and build the entire map-making process from the ground up. Two years later, the first Apple-branded mapping cars (similar to those operated by Google, TomTom and HERE) were spotted on streets across the US. A year later, in 2016, Apple opened a map data production center in Hyderabad, India, with 4,000 vacancies to fill. In addition, the company started to hire local teams around the world to develop the technology and improve the datasets.

Last week, TechCrunch published a major report about Apple’s newly designed map-making process. Interestingly, the key points of the process look exactly like those of Google, HERE and TomTom: mapping cars combined with satellite imagery, probe data from iPhones, and local field teams, with everything sent to India for editing. Some of these companies are able to automate more of the process than others, but the baseline is nearly the same.

Apple is becoming a real mapping company

This week we are witnessing a quiet but very significant milestone in the history of Apple Maps. With the launch of the new iOS 12 public preview, the company is officially publishing, for the first time, a version of its map produced entirely with its own resources. Although the new data will be available only for the San Francisco Bay Area, with this event we can now officially call Apple ‘a mapping company’!

The past nine years must have taught Apple some humility. Even after spending billions of dollars and years of development, today it is still able to publish data for just a small part of Northern California. It only proves that making high-quality maps (especially for navigation) is a complex and challenging process… even for Apple.
