{ The bioinfo-fr.net snippets }


Average color of a vector of colors in RGB + alpha [R]

30.07.2019     gdevailly      R RGB HTML colors 

  A small function returning the average color of a vector of colors in RGB (red, green, blue) space + alpha channel (transparency).
There are probably color spaces better suited to this kind of operation (see the sketch after the examples below).
mean_color <- function(mycolors) {
    R     <- strtoi(x = substr(mycolors,2,3), base = 16)
    G     <- strtoi(x = substr(mycolors,4,5), base = 16)
    B     <- strtoi(x = substr(mycolors,6,7), base = 16)
    alpha <- strtoi(x = substr(mycolors,8,9), base = 16)

    return(
        rgb(
            red   = round(mean(R)),
            green = round(mean(G)),
            blue  = round(mean(B)),
            alpha = round(mean(alpha)),
            maxColorValue = 255
        )
    )
}

mean_color(c("#000000FF", "#FFFFFFFF"))
mean_color(c("#FF0000FF", "#00FF00FF", "#FFFF00FF", "#FF0000FF"))
mean_color(rainbow(8))
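
By way of comparison, here is a minimal Python sketch (not part of the original snippet) of the same averaging done in linear-light RGB rather than directly on gamma-encoded sRGB values; decoding the sRGB transfer function before averaging is one example of a representation better suited to mixing colors. The helper names are made up for the illustration.

# Hypothetical illustration: average "#RRGGBBAA" colors in linear-light RGB.
def srgb_to_linear(c):   # c in [0, 1], standard sRGB decoding
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):   # inverse transfer function
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def mean_color_linear(colors):
    channels = []
    for i in (1, 3, 5, 7):                                   # R, G, B, alpha hex pairs
        values = [int(col[i:i + 2], 16) / 255 for col in colors]
        if i < 7:                                            # linearize RGB, average, re-encode
            avg = linear_to_srgb(sum(map(srgb_to_linear, values)) / len(values))
        else:                                                # alpha is averaged as-is
            avg = sum(values) / len(values)
        channels.append(round(avg * 255))
    return "#" + "".join("{:02X}".format(v) for v in channels)

print(mean_color_linear(["#000000FF", "#FFFFFFFF"]))         # a lighter grey than the naive #808080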
0/5 - [0 vote]




doi2bib [Bash]

02.07.2019     Natir      doi bibtex 

  A Bash function to get a BibTeX entry from a DOI.
doi2bib ()
{
    curl -H "Accept: application/x-bibtex; charset=utf-8" https://data.crossref.org/${1}
}

# call doi2bib and write result in clipboard
doi2clip ()
{
    doi2bib ${1} | xclip -selection c
}

## Usage 
## > doi2bib 10.1093/bioinformatics/btz219 
## @article{Marijon_2019,
##     doi = {10.1093/bioinformatics/btz219},
##     url = {https://doi.org/10.1093%2Fbioinformatics%2Fbtz219},
##     year = 2019,
##     month = {mar},
##     publisher = {Oxford University Press ({OUP})},
##     author = {Pierre Marijon and Rayan Chikhi and Jean-St{\'{e}}phane Varr{\'{e}}},
##     editor = {John Hancock},
##     title = {Graph analysis of fragmented long-read bacterial genome assemblies},
##     journal = {Bioinformatics}
## }
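
For reference, a rough Python equivalent (not from the original post) relying on the same DOI content negotiation; to the best of my knowledge https://doi.org honors the same Accept header and redirects Crossref DOIs to the service used above.

# Hypothetical Python counterpart using only the standard library.
import urllib.request

def doi2bib(doi):
    req = urllib.request.Request(
        "https://doi.org/" + doi,
        headers={"Accept": "application/x-bibtex; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as response:   # urlopen follows the redirect
        return response.read().decode("utf-8")

print(doi2bib("10.1093/bioinformatics/btz219"))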
4.75/5 - [3 votes]




Searching for a mystery sequence with Biopython [Python]

04.04.2019     erwan06      biopython fasta arabidopsis photosynthesis 

  This snippet reproduces the code for finding the most likely coded protein, as described on the site http://arn16s.ovh (step 3), starting from the FASTA sequence "U91966". The protein coded by one of the six reading frames is then displayed (Rubisco).
from Bio import SeqIO
# note: Bio.Alphabet was removed in Biopython 1.78, so no alphabet argument is used below

# import the mystery sequence retrieved with the online tools

def seq_codante_de_phase(phase):
    # scan the translated frame and keep the longest "M...*" stretch (the longest ORF)
    meilleure_seq = i = 0
    proteine_de_phase = ""
    longueur = len(phase) - 1
    for compteur in range(longueur):
        if phase[i] != 'M':
            i += 1
        else:
            start = i
            j = start
            while phase[j] != '*' and j < len(phase) - 1:
                j += 1
            stop = j
            seq_codante = phase[i:j]
            if len(seq_codante) > meilleure_seq:
                meilleure_seq = len(seq_codante)
                # as on the original site, the protein is kept without its initial methionine
                proteine_de_phase = phase[start + 1:stop]
            i += 1
    return proteine_de_phase

#main

mysterious_sequence = SeqIO.read("my_sequence.fasta", "fasta").seq


phase_1 = mysterious_sequence[0::]
phase_2 = mysterious_sequence[1::]
phase_3 = mysterious_sequence[2::]

# complement and reverse the mystery sequence to build the last 3 reading frames (reverse complement)
complement_sequence = mysterious_sequence.complement()
reverse_sequence = complement_sequence[::-1]

# frame starting from the last nucleotide, read in the opposite direction
phase_4 = reverse_sequence[0::]
# frame starting from the second-to-last nucleotide, read in the opposite direction
phase_5 = reverse_sequence[1::]
# frame starting from the third-to-last nucleotide, read in the opposite direction
phase_6 = reverse_sequence[2::]

prot_1 = str(phase_1.translate())
prot_2 = str(phase_2.translate())
prot_3 = str(phase_3.translate())
prot_4 = str(phase_4.translate())
prot_5 = str(phase_5.translate())
prot_6 = str(phase_6.translate())

liste = [seq_codante_de_phase(p) for p in (prot_1, prot_2, prot_3, prot_4, prot_5, prot_6)]
liste = sorted(liste, key=len, reverse=True)
# the most likely frame is the one giving the longest protein, i.e. the first element once sorted by decreasing length
print(liste[0])
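
As a point of comparison (not part of the original snippet), here is a more compact sketch of the same six-frame search using a regular expression on each translated frame; unlike the function above it keeps the initial methionine, and the file name is the same hypothetical my_sequence.fasta.

import re
from Bio import SeqIO

def longest_orf_protein(dna):
    """Return the longest M...* stretch over the six reading frames (stop codon excluded)."""
    best = ""
    for strand in (dna, dna.reverse_complement()):
        for frame in range(3):
            sub = strand[frame:]
            sub = sub[:len(sub) - len(sub) % 3]          # trim to a whole number of codons
            prot = str(sub.translate())
            for m in re.finditer(r"M[^*]*", prot):       # every run starting at a Met
                if len(m.group()) > len(best):
                    best = m.group()
    return best

print(longest_orf_protein(SeqIO.read("my_sequence.fasta", "fasta").seq))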
0/5 - [0 vote]




Average timeit [Python]

26.02.2019     Yo_O      timer python time-consuming time benchmark 

  A timeit function that lets you place @timeit as a decorator on a method to be timed.
Small bonus: it runs the decorated method n times and reports the average time.

source originale : https://github.com/realpython/materials/blob/master/pandas-fast-flexible-intuitive/tutorial/timer.py
import functools
import gc
import itertools
import sys
from timeit import default_timer as _timer


def timeit(_func=None, *, repeat=3, number=1000, file=sys.stdout):
    """Decorator: prints time from best of `repeat` trials.
    Mimics `timeit.repeat()`, but avg. time is printed.
    Returns function result and prints time.
    You can decorate with or without parentheses, as in
    Python's @dataclass class decorator.
    kwargs are passed to `print()`.
    >>> @timeit
    ... def f():
    ...     return "-".join(str(n) for n in range(100))
    ...
    >>> @timeit(number=100000)
    ... def g():
    ...     return "-".join(str(n) for n in range(10))
    ...
    """

    _repeat = functools.partial(itertools.repeat, None)

    def wrap(func):
        @functools.wraps(func)
        def _timeit(*args, **kwargs):
            # Temporarily turn off garbage collection during the timing.
            # Makes independent timings more comparable.
            # If it was originally enabled, switch it back on afterwards.
            gcold = gc.isenabled()
            gc.disable()

            try:
                # Outer loop - the number of repeats.
                trials = []
                for _ in _repeat(repeat):
                    # Inner loop - the number of calls within each repeat.
                    total = 0
                    for _ in _repeat(number):
                        start = _timer()
                        result = func(*args, **kwargs)
                        end = _timer()
                        total += end - start
                    trials.append(total)

                # We want the *average time* from the *best* trial.
                # For more on this methodology, see the docs for
                # Python's `timeit` module.
                #
                # "In a typical case, the lowest value gives a lower bound
                # for how fast your machine can run the given code snippet;
                # higher values in the result vector are typically not
                # caused by variability in Python’s speed, but by other
                # processes interfering with your timing accuracy."
                best = min(trials) / number
                print(
                    "Best of {} trials with {} function"
                    " calls per trial:".format(repeat, number)
                )
                print(
                    "Function `{}` ran in average"
                    " of {:0.3f} seconds.".format(func.__name__, best),
                    end="\n\n",
                    file=file,
                )
            finally:
                if gcold:
                    gc.enable()
            # Result is returned *only once*
            return result

        return _timeit

    # Syntax trick from Python @dataclass
    if _func is None:
        return wrap
    else:
        return wrap(_func)
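
A quick usage example (not in the original source), assuming the decorator above is in scope; the function name is made up for the illustration.

@timeit(repeat=5, number=10000)
def join_numbers(n=100):
    return "-".join(str(i) for i in range(n))

s = join_numbers()   # runs 5 trials of 10000 calls each and prints the per-call average of the best trial
print(s[:20])        # the decorated function still returns its normal result, exactly once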
5/5 - [1 vote]




Simple Fast Kmer counter (with small k) [C++]

28.01.2019     Natir      cpp kmer counter 

  A simple and fast k-mer counter.

Limitations:
- k must always be odd
- k must be chosen at compile time
- the memory usage is 2^(k*2-1) bytes
- in this version the maximum count of a k-mer is 255 (see comments)
- in this version the maximum value of k is 31 (see comments)
- forward and reverse complement k-mers are counted together (canonical k-mers)


The main function provides a simple example.
#include <bitset>
#include <cstdint>
#include <string>
#include <type_traits>

template <size_t K>
using kmer_t = typename std::conditional<
  K <= 4 && K % 2 == 1,
  std::uint_least8_t,
  typename std::conditional<
    K <= 8 && K % 2 == 1,
    std::uint16_t,
    typename std::conditional<
      K <= 16 && K % 2 == 1,
      std::uint32_t,
      typename std::conditional<
        K <= 32 && K % 2 == 1,
        std::uint64_t,
        std::false_type
      >::type
    >::type
  >::type
>::type;

template <std::uint8_t K>
constexpr std::uint64_t comp_mask() {
  if(K <= 4 && K % 2 == 1) {
    return 0b10101010;
  } else if(K <= 8 && K % 2 == 1) {
    return 0b1010101010101010;
  } else if(K <= 16 && K % 2 == 1) {
    return 0b10101010101010101010101010101010;
  } else if(K <= 32 && K % 2 == 1) {
    return 0b1010101010101010101010101010101010101010101010101010101010101010;
  } else {
    return 0;
  }
}

template <std::uint8_t K>
constexpr std::uint64_t max_k() {
  if(K <= 4) {
    return 4;
  } else if(K <= 8) {
    return 8;
  } else if(K <= 16) {
    return 16;
  } else if(K <= 32) {
    return 32;
  } else {
    return 0;
  }
}	     

template<std::uint8_t K>
kmer_t<K> seq2bit(std::string seq) {
  kmer_t<K> kmer = 0;
  
  for(auto n: seq) {
    kmer <<= 2;
    kmer |= ((n >> 1) & 0b11);
  }
  
  return kmer;
}

template<std::uint8_t K>
kmer_t<K> reverse_2(kmer_t<K> kmer) {
  kmer_t<K> reversed = 0;

  for(kmer_t<K> i = 0; i < K-1; ++i) {
    reversed = (reversed ^ (kmer & 0b11)) << 2;
    kmer >>= 2;
  }
  
  return reversed ^ (kmer & 0b11);
}

template<std::uint8_t K>
uint8_t parity(kmer_t<K> kmer) {
  return std::bitset<K*2>(kmer).count() % 2; 
}
 
template<std::uint8_t K>
kmer_t<K> get_cannonical(kmer_t<K> kmer) {
  uint8_t cleaning_move = (max_k<K>() - K) * 2;

  if(parity<K>(kmer) == 0) {
    return kmer;
  } else {
    
    kmer_t<K> hash = (reverse_2<K>(kmer) ^ comp_mask<K>());
    hash <<= cleaning_move;
    hash >>= cleaning_move;
        
    return hash;
  }
}

template<std::uint8_t K>
kmer_t<K> hash_kmer(kmer_t<K> kmer) {
  return get_cannonical<K>(kmer) >> 1;
}

#include <iostream>
#include <vector>

int main(int argc, char* argv[]) {

  constexpr std::uint8_t k = 3; // must be odd and at most 31 (see kmer_t and comp_mask above)

  std::string seq = "AAACCCTTTGGG";

  // To increase the maximal count, change uint8_t to uint16_t, uint32_t or uint64_t
  std::vector<std::uint8_t> kmer2count(1 << (k * 2 -1), 0); // 1 << n <-> 2^n

  for(size_t i = 0; i != seq.length() - k + 1; i++) {
    std::string sub_seq = seq.substr(i, k);
    
    uint32_t hash = hash_kmer<k>(seq2bit<k>(sub_seq));

    if(kmer2count[hash] != 255) {
      kmer2count[hash]++;
    }
  }
  
  for(size_t i = 0; i != kmer2count.size(); i++) {
    std::cout<<int(kmer2count[i])<<" ";
  }
  std::cout<<std::endl;
  
  return 0;
}
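
For readers more comfortable with Python, here is a rough sketch (not from the original post) of the same 2-bit encoding trick ((c >> 1) & 3 maps A, C, T, G to 0, 1, 2, 3, and XOR with 0b10 complements a base); it uses the more common min(forward, reverse complement) canonicalization instead of the parity trick above, and a plain dictionary instead of a preallocated table.

def count_canonical_kmers(seq, k=3):
    counts = {}
    for i in range(len(seq) - k + 1):
        fwd = 0
        for c in seq[i:i + k]:
            fwd = (fwd << 2) | ((ord(c) >> 1) & 0b11)   # 2-bit encode: A=0, C=1, T=2, G=3
        rev, tmp = 0, fwd
        for _ in range(k):
            rev = (rev << 2) | ((tmp & 0b11) ^ 0b10)    # reverse the bases and complement each one
            tmp >>= 2
        canon = min(fwd, rev)                           # canonical representative of the k-mer
        counts[canon] = counts.get(canon, 0) + 1
    return counts

print(count_canonical_kmers("AAACCCTTTGGG", 3))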
5/5 - [1 vote]




Encoding conversion for a tabular file [Bash]

12.10.2018     Yo_O      utf8 encoding csv excel table data cli 

  Using iconv (https://en.wikipedia.org/wiki/Iconv), you can now fix any "badly encoded" tabular file (read: "not encoded in UTF-8") coming from your favorite biologists in one short line.
iconv -f iso-8859-1 -t utf-8 input_file_in_ISO8859-1.csv > output_file_in_UTF8.csv
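
If iconv is not at hand, a minimal Python sketch does the same job under the same assumption (the input really is ISO-8859-1); the file names are simply those of the example above. On most Linux systems, file -bi input_file_in_ISO8859-1.csv is a quick way to guess the source encoding beforehand.

# Hypothetical Python equivalent of the iconv one-liner above.
with open("input_file_in_ISO8859-1.csv", encoding="iso-8859-1") as src, \
     open("output_file_in_UTF8.csv", "w", encoding="utf-8") as dst:
    dst.write(src.read())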
5/5 - [1 vote]




From FastQ to Fasta format [Bash]

04.10.2018     mathildefog      bash fastq fasta 

  Converts a FASTQ file (four lines per record) into a FASTA file.
cat myFile.fastq | paste - - - - | sed 's/^@/>/g'| cut -f1-2 | tr '\t' '\n' > myFile.fasta
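
If Biopython happens to be installed, the same conversion can also be sketched in Python (file names as in the one-liner above):

# Hypothetical Biopython equivalent of the one-liner above.
from Bio import SeqIO

SeqIO.convert("myFile.fastq", "fastq", "myFile.fasta", "fasta")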
5/5 - [2 votes]




louvain [Python]

24.09.2018     Mathurin      louvain 

  A version of the Louvain algorithm found in Python.
import numpy as np
import os


def genlouvain(B, seed=None):
    '''
    The optimal community structure is a subdivision of the network into
    nonoverlapping groups of nodes which maximizes the number of within-group
    edges and minimizes the number of between-group edges.
    This function is a fast an accurate multi-iterative generalization of the
    louvain community detection algorithm. This function subsumes and improves
    upon modularity_[louvain,finetune]_[und,dir]() and additionally allows to
    optimize other objective functions (includes built-in Potts Model i
    Hamiltonian, allows for custom objective-function matrices).
    Parameters
    ----------
    B : NxN np.arraylike
        objective-function matrix (e.g. a modularity matrix). This stripped-down
        version expects a matrix; the 'modularity' and 'potts' string shortcuts
        of the original implementation are not handled here.
    seed : int | None
        random seed. default value=None. if None, seeds from /dev/urandom.
    Returns
    -------
    ci : Nx1 np.array
        final community structure
    q : float
        optimized q-statistic (modularity only)
    '''
    np.random.seed(seed)
    st0 = np.random.get_state()

    n = len(B)
    ci = np.arange(n) + 1
    Mb = ci.copy()

    B = np.squeeze(np.asarray(B))
    B = (B + B.T)/2.0
    Hnm = np.zeros((n, n))
    for m in range(1, n + 1):
        Hnm[:, m - 1] = np.sum(B[:, ci == m], axis=1)  # node to module degree
    H = np.sum(Hnm, axis=1)  # node degree
    Hm = np.sum(Hnm, axis=0)  # module degree

    q0 = -np.inf
    # compute modularity
    q = np.sum(B[np.tile(ci, (n, 1)) == np.tile(ci, (n, 1)).T])

    first_iteration = True

    while q - q0 > 1e-10:
        it = 0
        flag = True
        while flag:
            it += 1
            if it > 1000:
                raise ValueError('Modularity infinite loop style G. '
                                    'Please contact the developer.')
            flag = False
            for u in np.random.permutation(n):
                ma = Mb[u] - 1
                dQ = Hnm[u, :] - Hnm[u, ma] + B[u, u]  # algorithm condition
                dQ[ma] = 0

                max_dq = np.max(dQ)
                if max_dq > 1e-10:
                    flag = True
                    mb = np.argmax(dQ)
                    Hnm[:, mb] += B[:, u]
                    Hnm[:, ma] -= B[:, u]  # change node-to-module strengths

                    Hm[mb] += H[u]
                    Hm[ma] -= H[u]  # change module strengths

                    Mb[u] = mb + 1

        _, Mb = np.unique(Mb, return_inverse=True)
        Mb += 1

        M0 = ci.copy()
        if first_iteration:
            ci = Mb.copy()
            first_iteration = False
        else:
            for u in range(1, n + 1):
                ci[M0 == u] = Mb[u - 1]  # assign new modules

        n = np.max(Mb)
        b1 = np.zeros((n, n))
        for i in range(1, n + 1):
            for j in range(i, n + 1):
                # pool weights of nodes in same module
                bm = np.sum(B[np.ix_(Mb == i, Mb == j)])
                b1[i - 1, j - 1] = bm
                b1[j - 1, i - 1] = bm
        B = b1.copy()

        Mb = np.arange(1, n + 1)
        Hnm = B.copy()
        H = np.sum(B, axis=0)
        Hm = H.copy()

        q0 = q
        q = np.trace(B)  # compute modularity

        # log the RNG seed state used for this run
        out_dir = 'output'
        if not os.path.isdir(out_dir):
            os.makedirs(out_dir)
        with open(os.path.join(out_dir, 'gen_louvain_seed.txt'), 'a+') as output:
            output.write("%i\n" % st0[1][0])


    return ci, q
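
A small usage sketch (not part of the original snippet): the function expects an objective-function matrix, so one common choice is the Newman-Girvan modularity matrix B = A - k k^T / (2m) built from an adjacency matrix A. The toy graph below is purely illustrative.

import numpy as np

# toy graph: two triangles (nodes 0-2 and 3-5) joined by a single edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1

k = A.sum(axis=1)                   # node degrees
m = k.sum() / 2                     # number of edges
B = A - np.outer(k, k) / (2 * m)    # Newman-Girvan modularity matrix

ci, q = genlouvain(B, seed=42)
print(ci)   # community labels (typically one community per triangle)
print(q)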
0/5 - [0 vote]




List the screens open on remote servers [Bash]

28.08.2018     Kumquatum      tool bashrc screen 

  Displays the names of your screens open on a given list of servers.
Add it to your .bashrc for quick access.
function screenAllServ ()
{
    echo -e "#######################\n### Current screens ###\n#######################"
    for servName in Serv1 Serv2 Serv3
    do
        ssh username@$servName.adresse.de.mon.serveur.com "echo -e \"----\n<><><> \"\$(hostname)\"<><><>\"; ls /var/run/screen/S-username/ | cat"
    done
}
0/5 - [0 vote]




Mapping in R [R]

27.08.2018     Remy_Moine      cartography points polygons vectors GIS 

  An attempt at pulling together the pieces needed to map various vector data layers in R.

It is partly based on material from the R Graph Gallery (https://www.r-graph-gallery.com/). More detailed explanations are available on that site.

The input data must be in a Mercator projection (EPSG 3857).

Additional packages: "ggplot2, ggmap, broom, sp, rgdal, maptools, plyr, rgeos"

R version 3.4.4
OS: Linux Xubuntu 16.04
library(ggmap)
library(ggplot2)
library(broom)
library(sp)
library(rgdal)
library(maptools)
library(plyr)
library(rgeos)

# store the usual projections (proj4 strings)
l93a<-"+init=epsg:27572"
l93<-"+init=epsg:2154"
wgs84<-"+init=epsg:4326"
merca<-"+init=epsg:3857"

# Load a polygon layer representing survey sectors
secteurs<-readOGR( dsn= getwd() , layer="secteurs") 
proj4string(secteurs)<-CRS(l93)
# Extract the polygon vertices so they can be drawn with ggplot.
secteurs@data$id = rownames(secteurs@data)
secteurs.points = fortify(secteurs,region="id")
secteurs.df = join(secteurs.points, secteurs@data,by="id")

# Create a georeferenced survey point
# WARNING! Everything must be in EPSG 3857
centro<-as.data.frame(cbind(lon=c(44.95),lat=c(6.10)))# data frame containing the points IN MERCATOR (EPSG 3857!)

# Retrieve a Google Maps satellite basemap (recent ggmap versions may require an API key via register_google())
# The basemap covers the area given as a quoted string (here, France). The search behaves like in Google Maps.
map <- ggmap(get_googlemap("France", zoom = 10, maptype = "satellite"))# zoom 0 -> whole Earth, zoom 20 -> field-level detail

# Final assembly
carte<-map +
  geom_point(data=centro,aes(x=lat,y=lon,colour=rownames(centro),shape=rownames(centro)),size=3)+
  xlab("Longitude")+ ylab("Latitude")+
  scale_color_manual(values=c("blue"))+
  theme(legend.position='none')
0/5 - [0 vote]




