satsense.features

class satsense.features.Feature(window_shapes, **kwargs)

Bases: abc.ABC

Feature superclass.

Parameters:
  • window_shapes (list[tuple]) – List of tuples of window shapes to calculate the feature on
  • **kwargs (dict) – Keyword arguments for the feature
base_image = None
static compute(window, **kwargs)[source]

Compute the feature on the window. This function needs to be set by the implementation subclass: compute = staticmethod(my_feature_calculation)

Parameters:
  • window (tuple[int]) – The shape of the window
  • **kwargs (dict) – The keyword arguments for the computation

indices

The indices for this feature in a feature set.

See also: FeatureSet

size = None
windows

Returns the windows this feature uses for calculation.

Return type: tuple[tuple[int]]
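
For illustration, a minimal sketch (not part of the original documentation) of a custom feature built on this superclass; the mean_intensity function and MeanIntensity class are hypothetical:

import numpy as np

from satsense.features import Feature


def mean_intensity(window, **kwargs):
    # Toy calculation: the mean value of the (grayscale) window.
    return np.array([np.mean(window)])


class MeanIntensity(Feature):
    # Hypothetical feature wrapping the calculation above.
    base_image = 'grayscale'                # base image to compute the feature on
    size = 1                                # length of the returned feature vector
    compute = staticmethod(mean_intensity)  # as described for Feature.compute


feature = MeanIntensity([(25, 25), (50, 50)])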

class satsense.features.FeatureSet[source]

Bases: object

FeatureSet Class

The FeatureSet class can be used to bundle a number of features together. This class then calculates the indices for each feature within a vector of all features stacked into a single 3-dimensional matrix.

items
add(feature, name=None)[source]
Parameters:
  • feature (Feature) – The feature to add to the set
  • name (str) – The name to give the feature in the set. If None, the feature's class name and length are used
Returns:
  • name (str) – The name of the added feature
  • feature (Feature) – The added feature
remove(name)[source]

Remove the feature from the set.

Parameters: name (str) – The name of the feature to remove

Returns: Whether the feature was successfully removed
Return type: bool
index_size

The size of the index

base_images

List of base images that were used to calculate these features

Type: list[str]
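
For illustration, a minimal sketch (not part of the original documentation) of bundling two of the features documented below into a FeatureSet:

from satsense.features import FeatureSet, HistogramOfGradients, Pantex

windows = ((50, 50), )

feature_set = FeatureSet()
feature_set.add(HistogramOfGradients(windows), name='hog')
feature_set.add(Pantex(windows))  # name derived from the class if omitted

print(feature_set.index_size)   # combined size of the feature indices
print(feature_set.base_images)  # base images needed by these features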

class satsense.features.HistogramOfGradients(window_shapes, **kwargs)[source]

Bases: satsense.features.Feature

Histogram of Oriented Gradient Feature Calculator

The compute method calculates the feature on a particular window. It returns the 1st and 2nd heaved central shift moments, the orientation of the first and second highest peaks, and the absolute sine difference between the orientations of the highest peaks.

Parameters:
  • window_shapes (list[tuple]) – The window shapes to calculate the feature on.
  • bins (int) – The number of bins to use. The default is 50
  • kernel (typing.Callable) – The function to use for smoothing. The default is scipy.stats.norm().pdf.
  • bandwidth (float) – The bandwidth for the smoothing. The default is 0.7
size

The size of the feature vector returned by this feature

Type:int
base_image

The name of the base image used to calculate the feature

Type:str

Example

Calculating the HistogramOfGradients on an image using a generator:

from satsense import Image
from satsense.generators import FullGenerator
from satsense.extract import extract_feature
from satsense.features import HistogramOfGradients

windows = ((50, 50), )
hog = HistogramOfGradients(windows)

image = Image('test/data/source/section_2_sentinel.tif',
              'quickbird')
image.precompute_normalization()
generator = FullGenerator(image, (10, 10))

feature_vector = extract_feature(hog, generator)
base_image = 'grayscale'
size = 5
static compute(window, bins=50, kernel=None, bandwidth=0.7)

Calculate the HoG features on the window.

Features are the 1st and 2nd order heaved central shift moments, the angle of the two highest peaks in the histogram, the absolute sine difference between the two highest peaks.

Parameters:
  • window (numpy.ndarray) – The window to calculate the features on (grayscale).
  • bands (dict) – A description of the bands used in the window.
  • bins (int) – The number of bins to use.
  • kernel (typing.Callable) – The function to use for smoothing. The default is scipy.stats.norm().pdf.
  • bandwidth (float) – The bandwidth for the smoothing.
Returns:

The 5 HoG feature values.

Return type:

numpy.ndarray
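
For illustration, a minimal sketch (not part of the original documentation) of calling compute directly on a synthetic grayscale window:

import numpy as np

from satsense.features import HistogramOfGradients

# Synthetic grayscale window; in practice windows come from a generator.
window = np.random.random((50, 50))

features = HistogramOfGradients.compute(window, bins=50, bandwidth=0.7)
print(features)  # expected to hold the 5 HoG feature values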

class satsense.features.Pantex(window_shapes, **kwargs)[source]

Bases: satsense.features.Feature

Pantex Feature Calculator

The compute method calculates the feature on a particular window. It returns the minimum of the grey-level co-occurrence matrix contrast property.

Parameters:
  • window_shapes (list) – The window shapes to calculate the feature on.
  • maximum (int) – The maximum value in the image.
size

The size of the feature vector returned by this feature

Type:int
base_image

The name of the base image used to calculate the feature

Type:str

Example

Calculating the Pantex on an image using a generator:

from satsense import Image
from satsense.generators import FullGenerator
from satsense.extract import extract_feature
from satsense.features import Pantex

windows = ((50, 50), )
pantex = Pantex(windows)

image = Image('test/data/source/section_2_sentinel.tif',
              'quickbird')
image.precompute_normalization()
generator = FullGenerator(image, (10, 10))

feature_vector = extract_feature(pantex, generator)
base_image = 'gray_ubyte'
size = 1
static compute(window, maximum=255)

Calculate the pantex feature on the given grayscale window.

Parameters:
  • window (numpy.ndarray) – A window on an image.
  • maximum (int) – The maximum value in the image.
Returns:

Pantex feature value.

Return type:

float
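
For illustration, a minimal sketch (not part of the original documentation) of calling compute directly on a synthetic gray_ubyte window:

import numpy as np

from satsense.features import Pantex

# Synthetic gray_ubyte window; in practice windows come from a generator.
window = np.random.randint(0, 256, size=(50, 50), dtype=np.uint8)

value = Pantex.compute(window, maximum=255)
print(value)  # minimum of the GLCM contrast over the window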

class satsense.features.NDXI(window_shapes, **kwargs)

Bases: satsense.features.Feature

The parent class of the family of NDXI features.

Parameters:window_shapes (list) – The window shapes to calculate the feature on.
compute

The compute function for this index; it is set by the implementing subclass (see Feature.compute).

size = 1
class satsense.features.NirNDVI(window_shapes, **kwargs)[source]

Bases: satsense.features.NDXI

The infrared-green normalized difference vegetation index.

For more information see [2].

Parameters:window_shapes (list) – The window shapes to calculate the feature on.

Notes

[2] https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index
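
Example

A sketch (not part of the original documentation) of calculating NirNDVI on an image using a generator, following the same pattern as the HistogramOfGradients example above and assuming the same test image:

from satsense import Image
from satsense.generators import FullGenerator
from satsense.extract import extract_feature
from satsense.features import NirNDVI

windows = ((50, 50), )
ndvi = NirNDVI(windows)

image = Image('test/data/source/section_2_sentinel.tif',
              'quickbird')
image.precompute_normalization()
generator = FullGenerator(image, (10, 10))

feature_vector = extract_feature(ndvi, generator)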
base_image = 'nir_ndvi'
class satsense.features.RgNDVI(window_shapes, **kwargs)[source]

Bases: satsense.features.NDXI

The red-green normalized difference vegetation index.

For more information see [3].

Parameters:window_shapes (list) – The window shapes to calculate the feature on.

Notes

[3] Motohka, T., Nasahara, K.N., Oguma, H. and Tsuchida, S., 2010. “Applicability of green-red vegetation index for remote sensing of vegetation phenology”. Remote Sensing, 2(10), pp. 2369-2387.
base_image = 'rg_ndvi'
class satsense.features.RbNDVI(window_shapes, **kwargs)[source]

Bases: satsense.features.NDXI

The red-blue normalized difference vegetation index.

For more information see [4].

Parameters:window_shapes (list) – The window shapes to calculate the feature on.

Notes

[4] Tanaka, S., Goto, S., Maki, M., Akiyama, T., Muramoto, Y. and Yoshida, K., 2007. “Estimation of leaf chlorophyll concentration in winter wheat [Triticum aestivum] before maturing stage by a newly developed vegetation index-RBNDVI”. Journal of the Japanese Agricultural Systems Society (Japan).
base_image = 'rb_ndvi'
class satsense.features.NDSI(window_shapes, **kwargs)[source]

Bases: satsense.features.NDXI

The snow cover index.

Parameters:window_shapes (list) – The window shapes to calculate the feature on.
base_image = 'ndsi'
class satsense.features.NDWI(window_shapes, **kwargs)[source]

Bases: satsense.features.NDXI

The water cover index.

Parameters:window_shapes (list) – The window shapes to calculate the feature on.
base_image = 'ndwi'
class satsense.features.WVSI(window_shapes, **kwargs)[source]

Bases: satsense.features.NDXI

The soil cover index.

Parameters:window_shapes (list) – The window shapes to calculate the feature on.
base_image = 'wvsi'
class satsense.features.Lacunarity(windows=((25, 25), ), box_sizes=(10, 20, 30))[source]

Bases: satsense.features.Feature

Calculate the lacunarity value over an image.

Lacunarity is a measure of ‘gappiness’ of the image. The calculation is performed following these papers:

Kit, Oleksandr, and Matthias Luedeke. “Automated detection of slum area change in Hyderabad, India using multitemporal satellite imagery.” ISPRS journal of photogrammetry and remote sensing 83 (2013): 130-137.

Kit, Oleksandr, Matthias Luedeke, and Diana Reckien. “Texture-based identification of urban slums in Hyderabad, India using remote sensing data.” Applied Geography 32.2 (2012): 660-667.
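
Example

A sketch (not part of the original documentation) of calculating Lacunarity on an image using a generator, following the same pattern as the examples above and assuming the same test image:

from satsense import Image
from satsense.generators import FullGenerator
from satsense.extract import extract_feature
from satsense.features import Lacunarity

lacunarity = Lacunarity(windows=((25, 25), ), box_sizes=(10, 20, 30))

image = Image('test/data/source/section_2_sentinel.tif',
              'quickbird')
image.precompute_normalization()
generator = FullGenerator(image, (10, 10))

feature_vector = extract_feature(lacunarity, generator)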

base_image = 'canny_edge'
static compute(canny_edge_image, box_sizes)

Calculate the lacunarities for all box_sizes.

class satsense.features.Sift(windows, kmeans: sklearn.cluster.MiniBatchKMeans, normalized=True)[source]

Bases: satsense.features.Feature

Scale-Invariant Feature Transform calculator

First create a codebook of SIFT features from the supplied images using from_images. Then we can compute the histogram of codewords for a given window.

See the OpenCV SIFT introduction for more information.

Parameters:
  • window_shapes (list) – The window shapes to calculate the feature on.
  • kmeans (sklearn.cluster.MiniBatchKMeans) – The trained KMeans clustering from opencv
  • normalized (bool) – If True normalize the feature by the total number of clusters

Example

Calculating the Sift feature on an image using a generator:

from satsense import Image
from satsense.generators import FullGenerator
from satsense.extract import extract_feature
from satsense.features import Sift

windows = ((50, 50), )

image = Image('test/data/source/section_2_sentinel.tif', 'quickbird')
image.precompute_normalization()

sift = Sift.from_images(windows, [image])

generator = FullGenerator(image, (10, 10))

feature_vector = extract_feature(sift, generator)
print(feature_vector.shape)
base_image = 'gray_ubyte'
static compute(window_gray_ubyte, kmeans: sklearn.cluster.MiniBatchKMeans, normalized=True)

Calculate the Scale-Invariant Feature Transform feature

The OpenCV SIFT features are first calculated on the window. The codewords of these features are then extracted using the previously computed cluster centers. Finally, a histogram of these codewords is returned.

Parameters:
  • window_gray_ubyte (ndarray) – The window to calculate the feature on
  • kmeans (sklearn.cluster.MiniBatchKMeans) – The trained KMeans clustering from opencv, see from_images
  • normalized (bool) – If True normalize the feature by the total number of clusters
Returns:

The histogram of SIFT feature codewords

Return type:

ndarray

classmethod from_images(windows, images: Iterator[satsense.image.Image], n_clusters=32, max_samples=100000, sample_window=(8192, 8192), normalized=True)[source]

Create a codebook of SIFT features from the supplied images.

Using the images, max_samples SIFT features are extracted evenly from all images. These features are then clustered into n_clusters clusters. This codebook can then be used to calculate a histogram of codewords for a given window.

Parameters:
  • windows (list[tuple]) – The window shapes to calculate the feature on.
  • images (Iterator[satsense.Image]) – Iterable of images to calculate the codebook on
  • n_clusters (int) – The number of clusters to create for the codebook
  • max_samples (int) – The maximum number of samples to use for creating the codebook
  • normalized (bool) – Whether or not to normalize the resulting feature with regard to the number of clusters
class satsense.features.Texton(windows, kmeans: sklearn.cluster.MiniBatchKMeans, normalized=True)[source]

Bases: satsense.features.Feature

Texton Feature Transform calculator

First create a codebook of Texton features from the supplied images using from_images. Then we can compute the histogram of codewords for a given window.

For more information see [1].

Parameters:
  • window_shapes (list) – The window shapes to calculate the feature on.
  • kmeans (sklearn.cluster.MiniBatchKMeans) – The trained KMeans clustering from opencv
  • normalized (bool) – If True normalize the feature by the total number of clusters

Example

Calculating the Texton feature on an image using a generator:

from satsense import Image
from satsense.generators import FullGenerator
from satsense.extract import extract_feature
from satsense.features import Texton

windows = ((50, 50), )

image = Image('test/data/source/section_2_sentinel.tif', 'quickbird')
image.precompute_normalization()

texton = Texton.from_images(windows, [image])

generator = FullGenerator(image, (10, 10))

feature_vector = extract_feature(texton, generator)
print(feature_vector.shape)

Notes

[1] Arbelaez, Pablo, et al., “Contour detection and hierarchical image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence (2011), vol. 33 no. 5, pp. 898-916.
base_image = 'texton_descriptors'
static compute(descriptors, kmeans: sklearn.cluster.MiniBatchKMeans, normalized=True)

Calculate the texton feature on the given window.

classmethod from_images(windows, images: Iterator[satsense.image.Image], n_clusters=32, max_samples=100000, sample_window=(8192, 8192), normalized=True)[source]

Create a codebook of Texton features from the supplied images.

Using the images, max_samples Texton features are extracted evenly from all images. These features are then clustered into n_clusters clusters. This codebook can then be used to calculate a histogram of codewords for a given window.

Parameters:
  • windows (list[tuple]) – The window shapes to calculate the feature on.
  • images (Iterator[satsense.Image]) – Iterable of images to calculate the codebook on
  • n_clusters (int) – The number of clusters to create for the codebook
  • max_samples (int) – The maximum number of samples to use for creating the codebook
  • normalized (bool) – Whether or not to normalize the resulting feature with regard to the number of clusters