Geographic Information Systems Stack Exchange is a question and answer site for cartographers, geographers and GIS professionals. It's 100% free, no registration required.

I'm new here and I'm sorry if I do something wrong.

I am struggling with a processing step and hopefully I will be able to solve it here.

I work with remote sensing applied to forestry, especially with LiDAR data. The idea is to use scikit-image for tree-top detection. Since I'm new to Python, I considered it a great personal triumph to do the following:

  1. Import a CHM (with matplotlib);
  2. Run a gaussian filter (with scikit-image package);
  3. Run a maxima filter (with scikit-image package);
  4. Run the peak_local_max (with scikit-image package);
  5. Show the CHM with the local maxima (with matplotlib);
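For reference, steps 2–4 above can be sketched on a synthetic array (the grid size, crown positions, and amplitudes below are made up for illustration; a real CHM would be read from disk):

```python
import numpy as np
from skimage.filters import gaussian
from skimage.feature import peak_local_max

# Synthetic 50x50 "CHM" with two Gaussian tree crowns (a stand-in for a real CHM)
yy, xx = np.mgrid[0:50, 0:50]
chm = (20 * np.exp(-((xx - 15) ** 2 + (yy - 15) ** 2) / 20.0)
       + 15 * np.exp(-((xx - 35) ** 2 + (yy - 35) ** 2) / 20.0))

smoothed = gaussian(chm, sigma=1)                  # step 2: Gaussian smoothing
maxima = peak_local_max(smoothed, min_distance=5)  # steps 3-4: (row, col) of each tree top
print(maxima)
```

The result is a plain numpy array of (row, column) pairs, which is exactly the situation described below: no shapefile, just image coordinates.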

Now my problem: when I import the CHM with matplotlib, the image loses its geographic coordinates, so the coordinates I have are just image coordinates (e.g. 250, 312). What I need is the value of the pixel under each local-maximum dot in the image (the red dots). Here on the forum I saw someone asking a similar thing (How do I get the pixel value of a GDAL raster under an OGR point without NumPy?), but he already had the points in a shapefile. In my case the points were computed with scikit-image (it is an array with the coordinates of each tree top), so I do not have a shapefile.

In conclusion, what I want in the end is a txt file with the coordinates of each local maxima in geographic coordinates, for example:

525412 62980123 1150 ...

[Image: Local maxima (red dots) in a CHM]

Thanks a lot for your attention.


2 Answers

Accepted answer (3 votes)

Firstly, welcome to the site!

Numpy arrays don't have a concept of coordinate systems built into the array. For a 2D raster they are simply indexed by row and column.

Note I'm making the assumption that you're reading a raster format that is supported by GDAL.

In Python the best way to import spatial raster data is with the rasterio package. The raw data imported by rasterio is still a numpy array without any knowledge of coordinate systems, but rasterio also gives you the dataset's affine transform (the transform attribute; older rasterio versions exposed it as affine), which you can use to convert raster columns and rows to projected coordinates. For example:

import rasterio

# The best way to open a raster with rasterio is through the context manager
# so that it closes automatically

with rasterio.open(path_to_raster) as source:

    data = source.read(1)        # read raster band 1 as a numpy array
    affine = source.transform    # the affine transform (older rasterio: source.affine)

# ... do some work with scikit-image and get an array of local maxima locations
# e.g.
# maxima = numpy.array([[0, 0], [1, 1], [2, 2]])
# Also note that the convention for a 2D numpy array is rows (y) first, then columns (x)

for point in maxima:  # loop over each (row, column) pair
    row, column = point
    x, y = affine * (column + 0.5, row + 0.5)  # +0.5 maps to the pixel centre
    print(x, y)

# Or you can do it all at once:

columns = maxima[:, 1]
rows = maxima[:, 0]

xs, ys = affine * (columns + 0.5, rows + 0.5)
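If you also want the CHM value (the tree height) at each maximum, you can pull it straight out of the data array with numpy fancy indexing — a small sketch with made-up numbers standing in for the rasterio band and the scikit-image output:

```python
import numpy as np

# Placeholder values: data would be the band read by rasterio,
# maxima the array returned by peak_local_max
data = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0],
                 [7.0, 8.0, 9.0]])
maxima = np.array([[0, 0], [1, 1], [2, 2]])

heights = data[maxima[:, 0], maxima[:, 1]]  # index by rows, then columns
print(heights)
```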

And from there you can write your results out to a text file however you like (I'd suggest taking a look at the built-in csv module, for example).
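A minimal sketch of writing a space-delimited text file with the csv module (the coordinate and height values below are placeholders; in practice they come from the affine transform and from indexing the CHM array):

```python
import csv

# Placeholder results for two tree tops
xs = [525412.0, 525418.5]
ys = [6298012.0, 6298020.5]
heights = [11.5, 9.8]

with open("treetops.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter=" ")
    for row in zip(xs, ys, heights):
        writer.writerow(row)
```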

Thank you very much. Looks like this may work. Since I am new at this, I still have to get familiar with a lot of things. Thanks for the patience. –  João Paulo Pereira Apr 10 at 15:14

From a quick look at matplotlib, I'd say you have to alter the axis scales after the import.

I think the problem is in scikit-image. When I run it scikit automatically changes to image coordinates. –  João Paulo Pereira Apr 10 at 11:08
