
k-means is a clustering algorithm, belonging to the family of unsupervised machine learning models. It aims at finding $k$ groups of similar data points (clusters) in an unlabeled multidimensional dataset.

## The k-means minimization problem

Let $(x_1, \ldots, x_n)$ be a set of $n$ observations with $x_i \in \mathbb{R}^{d}$, for $1 \leq i \leq n$. The aim of the k-means algorithm is to find a partition $S = \{S_1, \ldots, S_k\}$ of the $n$ observations into $k \leq n$ clusters, minimizing $D$, the within-cluster sum of squared distances to the centers: $$D(S) = \sum_{i=1}^k \sum_{x \in S_i} \| x - \mu_i \|^2$$ where $\mu_i$ is the $i$-th cluster center (i.e. the arithmetic mean of the cluster's observations): $\mu_i = \frac{1}{|S_i|} \sum_{x_j \in S_i} x_j$, for $1 \leq i \leq k$.
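For a concrete toy example (an illustrative NumPy computation, not part of the original post), here is $D$ evaluated on a partition of four 2-d points into two clusters:

```python
import numpy as np

# two clusters of 2-d observations
S1 = np.array([[0.0, 0.0], [2.0, 0.0]])
S2 = np.array([[10.0, 10.0], [10.0, 12.0]])

D = 0.0
for S_i in (S1, S2):
    mu_i = S_i.mean(axis=0)          # cluster center (arithmetic mean)
    D += ((S_i - mu_i) ** 2).sum()   # squared distances to the center
print(D)  # 4.0: each of the four points is at distance 1 from its center
```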

Unfortunately, finding the exact solution of this problem is very tough (NP-hard) and a local minimum is generally sought using a heuristic.

## The algorithm

Here is a simple description of the algorithm taken from the book "Data Science from Scratch" by Joel Grus (O'Reilly):

1. Start with a set of k-means, which are $k$ points in $d$-dimensional space.
2. Assign each point to the mean to which it is closest.
3. If no point’s assignment has changed, stop and keep the clusters.
4. If some point’s assignment has changed, recompute the means and return to step 2.
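These four steps can be sketched in a few lines of NumPy (a minimal illustration, not the book's code; the random initialization, the `init` override and the handling of empty clusters are choices made here for simplicity):

```python
import numpy as np

def k_means(X, k, init=None, max_iter=100, seed=0):
    """A minimal Lloyd's algorithm: alternate assignment and update steps."""
    rng = np.random.RandomState(seed)
    # step 1: start with k means (k observations drawn at random,
    # unless explicit initial centers are given)
    centers = (X[rng.choice(len(X), size=k, replace=False)]
               if init is None else np.asarray(init, dtype=float))
    labels = None
    for _ in range(max_iter):
        # step 2: assign each point to the mean it is closest to
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # step 3: stop when no assignment has changed
        if labels is not None and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # step 4: recompute each mean (empty clusters keep their old center)
        centers = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                            else centers[i] for i in range(k)])
    return centers, labels
```

Ties and empty clusters are handled naively here; scikit-learn's `KMeans` adds a smarter initialization (k-means++) and multiple restarts.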

This algorithm is an iterative refinement procedure. In his book "Python Data Science Handbook" (O'Reilly), Jake VanderPlas describes this algorithm as a kind of Expectation–Maximization (E–M). Since step 1 is the initialization and step 3 the stopping criterion, we can see that the algorithm consists of only two alternating steps:

- Step 2 is the Expectation step: "updating our expectation of which cluster each point belongs to".
- Step 4 is the Maximization step: "maximizing some fitness function that defines the location of the cluster centers".

This is described in more detail in the following link.

An interesting geometric interpretation is that step 2 corresponds to partitioning the observations according to the Voronoi diagram generated by the centers computed previously (in step 1 or step 4). This is why the standard k-means algorithm is also called Lloyd's algorithm: a Voronoi iteration method for finding evenly spaced sets of points in subsets of Euclidean space.

### Voronoi diagram

Let us have a look at the Voronoi diagram generated by the $k$ means.
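As an illustration (a small SciPy sketch, not from the original post; the points below are arbitrary stand-ins for the $k$ means):

```python
import numpy as np
from scipy.spatial import Voronoi

# four arbitrary "means" in the plane
means = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 4.0], [5.0, 5.0]])
vor = Voronoi(means)

# one Voronoi region per mean; assigning a point to its closest mean
# (step 2 of the algorithm) places it inside that mean's region
print(len(vor.point_region))  # prints 4
```

`scipy.spatial.voronoi_plot_2d(vor)` draws the diagram directly with Matplotlib.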


In this post we are simply going to retrieve the restaurants of the city of Lyon, France from OpenStreetMap, and then plot them with Bokeh.

Downloading the restaurant names and coordinates is done using a fork of the great OSMnx library. From what I understand, the OSM-POI feature of this fork will probably soon be merged into OSMnx (issue).

First we create a fresh conda env and install jupyterlab and bokeh (the following lines show the Linux way to do it, but a similar thing could be done on Windows):

```shell
$ conda create -n restaurants python=3.6
$ source activate restaurants
$ conda install jupyterlab
$ conda install -c bokeh bokeh
$ jupyter labextension install jupyterlab_bokeh
$ jupyter lab osm_restaurants.ipynb
```


The JupyterLab extension allows the rendering of Bokeh JS content.

Then we need to install the POI fork of OSMnx:

```shell
$ git clone git@github.com:HTenkanen/osmnx.git
$ cd osmnx/
osmnx $ git checkout 1-osm-poi-dev
osmnx $ pip install .
osmnx $ cd ..
```


And we are ready to run the notebook:

```shell
$ jupyter lab osm_restaurants.ipynb
```


In [1]:

```python
import osmnx as ox

place = "Lyon, France"
restaurant_amenities = ['restaurant', 'cafe', 'fast_food']
restaurants = ox.pois_from_place(place=place,
                                 amenities=restaurant_amenities)[['geometry',
                                                                  'name',
                                                                  'amenity',
                                                                  'cuisine',
                                                                  'element_type']]
```


We are looking for three kinds of food-related amenities: restaurants, cafés and fast food. The collected data is returned as a geodataframe, which is basically a Pandas dataframe associated with a geoseries of Shapely geometries. Along with the geometry, we are only keeping 4 columns:

- restaurant name,
- amenity type (restaurant, café or fast_food),
- cuisine type and
- element_type (OSM types: node, way, relation).
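Since the result behaves like a regular Pandas dataframe, the usual machinery applies; for instance, counting the amenity types (the toy dataframe below stands in for the downloaded data, with the geometry column omitted):

```python
import pandas as pd

# toy stand-in for the restaurants GeoDataFrame
restaurants_df = pd.DataFrame({
    'name': ['Le Petit Comptoir', "L'Esprit Bistro", "McDonald's"],
    'amenity': ['restaurant', 'restaurant', 'fast_food'],
    'cuisine': ['international', None, 'burger'],
})
print(restaurants_df['amenity'].value_counts())
```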
In [2]:

```python
restaurants.head()
```

Out[2]:

| | geometry | name | amenity | cuisine | element_type |
|---|---|---|---|---|---|
| 25733699 | POINT (4.8634608 45.7439964) | Le Petit Comptoir | restaurant | international | node |
| 25733700 | POINT (4.8689407 45.7410332) | L'Esprit Bistro | restaurant | NaN | node |
| 26641424 | POINT (4.8346121 45.7569848) | Comptoir des Marronniers | restaurant | NaN | node |
| 33065934 | POINT (4.7732746 45.7393443) | Auberge de la Vallée | restaurant | NaN | node |
| 35694312 | POINT (4.8342288 45.7581985) | McDonald's | fast_food | burger | node |
In [3]:

```python
ax = restaurants.plot()
```



A script for SQL Server, to be run as sysadmin or as a user that has enough privileges on all databases, listing the size of every table:

```sql
CREATE PROCEDURE [dbo].[sp_get_tables_sizes_all_dbs]
AS
BEGIN
    -- SQL Server 2005+
    IF (SELECT count(*) FROM tempdb.sys.objects WHERE name = '##TABLESIZES_ALLDB') = 1
    BEGIN
        DROP TABLE ##TABLESIZES_ALLDB;
    END

    CREATE TABLE ##TABLESIZES_ALLDB (
        snapdate datetime,
        srv nvarchar(1000),
        sv nvarchar(1000),
        _dbname nvarchar(1000),
        nomTable nvarchar(1000),
        "partition_id" bigint,
        "partition_number" int,
        lignes bigint,
        "memory (kB)" bigint,
        "data (kB)" bigint,
        "indexes (kb)" bigint,
        "data_compression" int,
        data_compression_desc nvarchar(1000)
    )

    EXECUTE master.sys.sp_MSforeachdb 'USE [?];
    insert into ##TABLESIZES_ALLDB
    select getdate() as snapdate,
           cast(serverproperty(''MachineName'') as nvarchar(1000)) svr,
           cast(@@servicename as nvarchar(1000)) sv,
           ''?'' _dbname,
           nomTable = object_name(p.object_id),
           p.partition_id,
           p.partition_number,
           lignes = sum(CASE When (p.index_id < 2) and (a.type = 1) Then p.rows Else 0 END),
           ''memory (kB)'' = cast(ltrim(str(sum(a.total_pages) * 8192 / 1024., 15, 0)) as float),
           ''data (kB)'' = ltrim(str(sum(CASE When a.type <> 1 Then a.used_pages
                                              When p.index_id < 2 Then a.data_pages
                                              Else 0 END) * 8192 / 1024., 15, 0)),
           ''indexes (kb)'' = ltrim(str((sum(a.used_pages) - sum(CASE When a.type <> 1 Then a.used_pages
                                                                      When p.index_id < 2 Then a.data_pages
                                                                      Else 0 END)) * 8192 / 1024., 15, 0)),
           p.data_compression,
           p.data_compression_desc
    from sys.partitions p, sys.allocation_units a, sys.sysobjects s
    where p.partition_id = a.container_id
      and p.object_id = s.id
      and s.type = ''U'' -- user table type (system tables exclusion)
    group by p.object_id, p.partition_id, p.partition_number, p.data_compression, p.data_compression_desc
    order by 3 desc';

    SELECT * FROM ##TABLESIZES_ALLDB
END
GO
```

Since version 9i of Oracle, memory management can be handled automatically.

The PGA_AGGREGATE_TARGET parameter replaces the SORT_AREA_SIZE and HASH_AREA_SIZE parameters used in 8i.

Recall that the PGA is a private memory area where processes allocate memory for sort, hash and merge operations. The PGA is therefore separate from the SGA (System Global Area). A third memory area, the UGA (User Global Area), holds information about the state of sessions and cursors. In dedicated server mode, processes allocate the UGA inside the PGA, whereas in shared server mode the UGA is allocated inside the SGA (in the LARGE POOL, to be exact).

