
Full list of Papers

Browse my papers by category

PhD Thesis and Habilitation

PhD Thesis

Get a synthesis of more than 15 years of research activities.

Peer-reviewed international journals

Papers

Get full digital access to my peer-reviewed papers.

Book chapters

Book chapters

Browse & download my book chapters.

Conference proceedings

Proceedings

Get online access to conference proceedings.

All documents are available as PDF files unless otherwise noted. They are presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.


PhD Thesis

My PhD thesis and my Habilitation qualification (accreditation to supervise research, "Habilitation à diriger des recherches", abbreviated HDR) are available for download. Both are written in French.

  1. D. Ginhac, "Fast prototyping of parallel image processing applications using functional skeletons",
    PhD thesis defended on January 25th, 1999 at Blaise Pascal University, Clermont-Ferrand, France.
    pdf (1.8 MB) PhD on Hal

    Abstract: We present SKiPPER, a software environment dedicated to the fast prototyping of vision algorithms on MIMD/DM platforms. This software is based upon the concept of algorithmic skeletons, i.e. higher-order program constructs encapsulating recurring forms of parallel computations and hiding their low-level implementation details. Examples of such skeletons in low- to mid-level image processing include geometric decomposition and data or task farming. Each skeleton is given an architecture-independent functional (but executable) specification, a portable implementation as a process template and an analytic performance model. The source program is a purely functional specification of the algorithm in which all parallelism is made explicit by means of composing instances of selected skeletons, each instance taking as parameters user-specific sequential functions written in C. This specification is turned into a process graph in which nodes correspond to sequential functions and/or skeleton control processes and edges to communications. This graph is mapped onto the actual physical topology using a third-party CAD software (SynDEx). The result is a deadlock-free, optimized (but still portable) distributed executive which can be run directly on the target platform. The initial specification, written in ML, can also be executed on any sequential platform to check the correctness of the parallel algorithm and to predict performance. In that case, the applicative semantics of skeletons guarantees the equivalence between sequential and parallel results. The applicability of SKiPPER concepts and tools has been assessed by parallelizing several realistic real-time vision applications both on a multi-DSP platform and a network of workstations (connected component labeling, road tracking algorithm based upon marking detection). This experiment showed a dramatic reduction in development times (hence the term fast prototyping) with measured performances staying on a par with those obtained with hand-crafted parallel versions.

    Bibtex Reference:

    @phdthesis{php:ginhac:1999,
    	author = {D. Ginhac},
    	title = {Fast prototyping of parallel image processing applications using functional skeletons},
    	year = {1999},
    	school = {Blaise Pascal University},
    	URL = {http://tel.archives-ouvertes.fr/tel-00550828/fr/},
    }
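    The skeleton idea above can be sketched in a few lines of Python. This is an illustrative analogue, not SKiPPER itself (which used Caml specifications parameterized by C functions); the names `farm` and `farm_parallel` are hypothetical. The point it demonstrates is the one the abstract makes: the skeleton has a sequential applicative semantics, and any parallel implementation must return the same result.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def farm(worker, items):
        """Task-farm skeleton, sequential reference semantics: apply a
        user-supplied sequential worker to every item."""
        return [worker(x) for x in items]

    def farm_parallel(worker, items, n_workers=4):
        """The same skeleton with a parallel implementation; the applicative
        semantics guarantee the result equals the sequential version."""
        with ThreadPoolExecutor(max_workers=n_workers) as ex:
            return list(ex.map(worker, items))

    # Hypothetical use: row-wise binarization of a tiny "image".
    image = [[10, 200, 50], [0, 255, 128]]
    binarize_row = lambda row: [1 if p > 100 else 0 for p in row]
    result = farm(binarize_row, image)          # [[0, 1, 0], [0, 1, 1]]
    assert result == farm_parallel(binarize_row, image)
    ```

    Because the sequential and parallel versions agree by construction, the parallel program can be debugged entirely on a sequential machine, which is what made the fast-prototyping workflow possible.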
    			

  2. D. Ginhac, "Adéquation Algorithme Architecture - Aspects logiciels, matériels et cognitifs",
    Habilitation defended on December 8th, 2008 at the University of Burgundy, Dijon, France.
    pdf (14 MB) Habilitation on Hal

    Abstract: The work presented for this Habilitation à Diriger des Recherches falls mainly within the so-called "Algorithm-Architecture Matching" (A3) field of CNU section 61, "Computer engineering, automatic control and signal processing". Its common goal is the implementation of hardware and software systems dedicated to artificial vision under strong timing constraints. These hardware systems rely mainly on image sensors specifically developed in CMOS technology and interfaced with more traditional computing structures such as FPGAs or DSPs.

    This research focuses on several crucial aspects such as image acquisition by dedicated sensors, the development and programming of optimized image-processing architectures, and the real-time implementation of signal- and image-processing algorithms on these architectures. These hardware vision systems must meet critical constraints such as low sensor noise, high throughput and fast access to information, processing speed, embeddability, and ease of application programming. Satisfying such requirements inevitably calls for the modeling, development, and design of fully dedicated sensors and architectures. Consequently, an "Algorithm-Architecture Matching" approach is taken at every hierarchical level, from the finest level in the microelectronic design of sensors or dedicated circuits up to the highest level with the development of software tools for the fast prototyping of image-processing applications. The originality of this work is to back the theoretical approach with a systematic experimental one, through demonstrators built on electronic and computing systems running complex image-processing applications. More specifically, the targeted applications concern Information Science and Technology, such as real-time face recognition; Life Science and Health, such as real-time pressure-measurement devices for venous diseases; and Engineering Science, such as real-time quality control of manufactured products.

    Bibtex Reference:

    @phdthesis{php:ginhac:2008,
    	author = {D. Ginhac},
    	title = {Adéquation Algorithme Architecture - Aspects logiciels, matériels et cognitifs},
    	year = {2008},
    	school = {University of Burgundy},
    	URL = {http://tel.archives-ouvertes.fr/tel-00550828/fr/},
    	type= {Habilitation à Diriger des Recherches},
    }
    			

Peer-reviewed publications

Below you can find my publications that can be accessed and downloaded for personal use. All publications are available as pdf documents unless otherwise noted.

Didn't find a specific paper? Just ask for it.

  1. F. Lebrun, R. Terrier, D. Prêle, D. Pellion, D. Ginhac, R. Chipaux, E. Bréelle, P. Laurent, J-P. Baronick, A. Noury, C. Buy and C. Olivetto, “The Gamma Cube: a new way to explore the gamma-ray sky”, Proceedings of Science – PoS(INTEGRAL2014), In Press, 2015.
    pdf (2.6 MB) In Press

    Abstract: We propose a new concept to allow the tracking of electrons in a gamma-ray telescope operating in the 5–100 MeV band. The idea of this experiment is to image the ionizing tracks that charged particles produce in a scintillator. It is a pair creation telescope at high energy and a Compton telescope with electron tracking at low energy. The telescope features a large scintillator transparent to the scintillation light, an ad hoc optical system and a high-resolution, highly sensitive imager. The performance perspectives and the advantages of such a system are outstanding, but the technical difficulties are serious. A few years of research and development within the scientific community are required to reach the technology readiness level (TRL) appropriate to propose the Gamma Cube in response to a flight opportunity.

    Bibtex Reference:

    @article {pos2015,
    	author = {Lebrun, François and Terrier, Régis and Prêle, Damien and Pellion, Denis and Ginhac, Dominique and Chipaux, Remi and Bréelle, Eric and Laurent, Philippe and Baronick, Jean-Pierre and Noury, Alexis and Buy, Christelle and Olivetto, Christian},
    	title = {The Gamma Cube: a new way to explore the gamma-ray sky},
    	journal = {Proceedings of Science – PoS(INTEGRAL2014)},
    	year = {2015}}
    				

  2. D. Pellion, K. Jradi, N. Brochard, D. Prêle and D. Ginhac, “Single-Photon Avalanche Diodes (SPAD) in CMOS 0.35µm technology”, Nuclear Instruments and Methods in Physics Research Section A, In Press, 2015.
    pdf (2.6 MB) In Press

    Abstract: Some decades ago, single photon detection used to be the terrain of the photomultiplier tube (PMT), thanks to its characteristics of sensitivity and speed. However, PMTs have several disadvantages such as low quantum efficiency, overall dimensions, and cost, making them unsuitable for compact integrated systems. So, the past decade has seen a dramatic increase in interest in new integrated single-photon detectors called Single-Photon Avalanche Diodes (SPAD) or Geiger-mode APDs. SPADs work in avalanche mode above the breakdown level. When an incident photon is captured, a very fast avalanche is triggered, generating an easily detectable current pulse. This paper discusses SPAD detectors fabricated in a standard CMOS technology featuring both single-photon sensitivity and excellent timing resolution, while guaranteeing high integration. In this work, we investigate the design of SPAD detectors using the AMS 0.35µm CMOS Opto technology. Indeed, such a standard CMOS technology allows producing large surfaces (a few mm2) of single-photon-sensitive detectors. Moreover, SPADs in CMOS technologies can be associated with electronic readout such as active quenching, digital-to-analog converters, memories and any specific processing required to build efficient calorimeters (Silicon PhotoMultiplier - SiPM) or high-resolution imagers (SPAD imager). The present work investigates SPAD geometry. A MOS transistor has been used instead of a resistor to adjust the quenching resistance and find its optimum value. From this first set of results, a detailed study of the Dark Count Rate (DCR) has been conducted. Our results show that the dark count rate increases with the size of the photodiodes and with temperature (at T=22.5°C, the DCR of a 10 µm photodiode is 2020 count/s, while it is 270 count/s at T=-40°C for an overvoltage of 800 mV). A small pixel size is desirable, because the DCR per unit area decreases with the pixel size. We also found that the adjustment of the overvoltage is very sensitive and depends on the temperature. The temperature will be adjusted in subsequent experiments.

    Keywords: Photodetectors, Avalanche photodiodes (APDs); Optoelectronics, Photonic integrated circuit, Integrated optoelectronic circuits

    Bibtex Reference:

    @article {nima2015,
    	author = {Pellion, Denis and Jradi, Khalil and Brochard, Nicolas and Prele, Damien and Ginhac, Dominique},
    	title = {Single-Photon Avalanche Diodes (SPAD) in CMOS 0.35µm technology},
    	journal = {Nuclear Instruments and Methods in Physics Research Section A},
    	year = {2015}}
    				

  3. M. Assaad, M. Mohsen, D. Ginhac and F. Mériaudeau, “A 3-Bit Pseudo Flash ADC based Low-Power CMOS Interface Circuit Design for Optical Sensor”, Journal of Low Power Electronics, 11(1), pp. 1-10, 2015.
    pdf (3.1 MB) In Press

    Abstract: The paper presents a CMOS interface circuit design for optical pH measurement that can produce an 8-bit digital output representing the color information (i.e., wavelength, λ). In this work we focus on reducing the component count by design, and hence reducing the cost and silicon area. While it could be further optimized for lower power consumption, the proposed design has been implemented using standard cells provided by the foundry (i.e. AMS 0.35 μm CMOS) as a proof of concept. The biasing current and power consumption of the fabricated chip are measured at 11 μA and 37 μW respectively using a 3.3 V supply voltage. Experimental results have further validated the proposed design concept. The number of detectable colors is eight and can be extended to a higher number without any major change in the architecture.

    Keywords: CMOS, Interface Circuit, Optical Sensor, Low Power

    Bibtex Reference:

    @article {jolpe2015,
    	author = {Assaad, Maher and Mohsen, Mousa and Ginhac, Dominique and Mériaudeau, Fabrice},
    	title = {A 3-Bit Pseudo Flash ADC based Low-Power CMOS Interface Circuit Design for Optical Sensor},
    	journal = {Journal of Low Power Electronics},
    	pages = {1-10},
    	volume = {11},
    	issue = {1},
    	month = {March},
    	year = {2015}}
    				

  4. K. Jradi, D. Pellion, and D. Ginhac, “Design, Characterization and Analysis of a 0.35µm CMOS Single Photon Avalanche Diode”, Sensors, 14(12), pp. 22773-22784, 2014.
    pdf (2.8 MB) Original paper on MDPI

    Abstract: Most works on single-photon detectors rely on Single Photon Avalanche Diodes (SPADs) designed with dedicated technological processes in order to achieve single-photon sensitivity and excellent timing resolution. Instead, this paper focuses on the implementation of high-performance SPAD detectors manufactured in a standard 0.35 µm opto CMOS technology provided by AMS. We propose a series of low-noise SPADs designed with a variable pitch from 20 µm down to 5 µm. This opens the way to the integration of large arrays of optimized SPAD pixels with a pitch of a few micrometers in order to provide high-resolution single-photon imagers. We experimentally demonstrate that a 20 µm SPAD appears as the most relevant detector in terms of signal-to-noise ratio, enabling the emergence of large SPAD arrays.

    Keywords: Photodetectors, Avalanche photodiodes (APDs); Optoelectronics, Photonic integrated circuit, Integrated optoelectronic circuits

    Bibtex Reference:

    @article {sensors:10.3390/s141222773,
    	author = {Jradi, Khalil and Pellion, Denis and Ginhac, Dominique},
    	title = {Design, Characterization and Analysis of a 0.35µm CMOS Single Photon Avalanche Diode},
    	journal = {Sensors},
    	pages = {22773-22784},
    	volume = {14},
    	issue = {12},
    	url = {http://www.mdpi.com/1424-8220/14/12/22773},
    	month = {December},
    	year = {2014}}
    				

  5. P.J. Lapray, B. Heyrman, and D. Ginhac, “Hardware-based smart camera for recovering high dynamic range video from multiple exposures”, Optical Engineering, 53(10), 102110 (Sep 23, 2014).
    pdf (4.8 MB) Original paper on SPIE Digital Library

    Abstract: In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.

    Keywords: Cameras, Computer hardware, High dynamic range imaging, Video, Data storage

    Bibtex Reference:

    @article{spie:10.1117/1.OE.53.10.102110,
    	title={Hardware-based smart camera for recovering high dynamic range video from multiple exposures},
    	author={Lapray, Pierre-Jean and Heyrman, Barthelemy and Ginhac, Dominique},
    	journal={Optical Engineering},
    	volume = {53},
    	issue = {10},
    	publisher={SPIE},
    	issn={0091-3286},
    	doi={10.1117/1.OE.53.10.102110},
    	url={http://dx.doi.org/10.1117/1.OE.53.10.102110},
    	keywords={Cameras; Computer hardware; High dynamic range imaging; Video; Data storage},
    	pages={102110},
    	year={2014}
    	}
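    The core of multiple-exposure HDR recovery can be illustrated with a tiny Python sketch. This is not the camera's pipeline (the paper describes a hardware architecture with exposure control and memory management); it is a minimal per-pixel merge assuming a linear sensor response, with hypothetical names `hat_weight` and `merge_hdr`.

    ```python
    def hat_weight(z):
        """Triangular weighting over 8-bit codes: trust mid-range values most,
        distrust under- and over-exposed pixels."""
        return min(z, 255 - z) + 1

    def merge_hdr(pixels, exposures):
        """Merge one scene point captured at several exposure times into a
        relative radiance estimate (weighted average of z / t)."""
        num = sum(hat_weight(z) * z / t for z, t in zip(pixels, exposures))
        den = sum(hat_weight(z) for z in pixels)
        return num / den

    # A pixel reading 100 at 1x exposure and 200 at 2x exposure is consistent
    # with a relative radiance of 100:
    radiance = merge_hdr([100, 200], [1.0, 2.0])   # 100.0
    ```

    Short exposures contribute reliable values in bright areas, long exposures in dark areas; the weighting selects whichever captures are well exposed at each pixel, which is why details appear in both regions at once.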
    	

  6. M. Assaad, I. Yohannes, A. Bermak, D. Ginhac and F. Meriaudeau, “Design and Characterization of Automated Color Sensors System”, International Journal on Smart Sensing and Intelligent Systems, 7(1), pp. 1-12, 2014.
    pdf (843 KB) Original paper on S2IS

    Abstract: The paper presents a color sensor system that can process light reflected from a surface and produce a digital output representing the color of the surface. The end-user interface circuit requires only a 3-bit pseudo flash analog-to-digital converter (ADC) in place of the conventional design comprising an ADC, a digital signal processor and memory. For scalability and compactness, the ADC was designed such that only two comparators are required regardless of the number of colors/wavelengths to be identified. The complete system design has been implemented in hardware (breadboard) and fully characterized. The ADC achieved less than 0.1 LSB for both INL and DNL. The experimental results also demonstrate that the color sensor system works as intended at 20 kHz while the ADC maintains greater than 2.5 ENOB. This work proved the design concept, and the system will be realized in integrated circuit technology in the future to improve its operating frequency.

    Keywords: Color sensor, light sensor, analog to digital converter (ADC), flash ADC

    Bibtex Reference:

    @article {s2is:2014,
    	author = {Assaad, Maher and Yohannes, Israel and Bermak, Amine and Ginhac, Dominique and Meriaudeau, Fabrice},
    	title = {Design and Characterization of Automated Color Sensors System},
    	journal = {International Journal on Smart Sensing and Intelligent Systems},
    	pages = {1-12},
    	volume = {7},
    	issue = {1},
    	url = {http://www.s2is.org/Issues/v7/n1/},
    	month = {March},
    	year = {2014}}
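    For orientation, here is what a conventional flash ADC does, sketched in Python. This is the textbook full-flash scheme the paper's two-comparator "pseudo flash" design improves upon, not the proposed circuit itself; `flash_adc` and its threshold ladder are illustrative assumptions.

    ```python
    def flash_adc(vin, vref=3.3, bits=3):
        """Idealized flash ADC: 2**bits - 1 comparators against a resistor-ladder
        of equally spaced thresholds produce a thermometer code, which is then
        encoded to binary (here simply by counting the ones)."""
        n = 2 ** bits
        thresholds = [vref * k / n for k in range(1, n)]
        thermometer = [vin > t for t in thresholds]
        return sum(thermometer)

    codes = [flash_adc(v) for v in (0.0, 1.7, 3.3)]   # [0, 4, 7]
    ```

    A 3-bit full flash needs seven comparators; the paper's contribution is obtaining the same resolution with only two, independently of the number of colors to identify.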
    				

  7. P.J. Lapray, B. Heyrman, and D. Ginhac, “HDR-ARtiSt: an adaptive real-time smart camera for high dynamic range imaging”, Journal of Real-Time Image Processing, 7(1), pp. 1-12, 2014.
    pdf (961 KB) Original paper on SpringerLink

    Abstract: This paper describes a complete FPGA-based smart camera architecture named HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera) which produces a real-time high dynamic range (HDR) live video stream from multiple captures. A specific memory management unit has been defined to adjust the number of acquisitions to improve HDR quality. This smart camera is built around a standard B&W CMOS image sensor and a Xilinx FPGA. It embeds multiple captures, HDR processing, data display and transfer, which is an original contribution compared to the state-of-the-art. The proposed architecture enables a real-time HDR video flow for a full sensor resolution (1.3 Mega pixels) at 60 frames per second.

    Keywords: Smart camera, High dynamic range, memory management core, Parallel processing, FPGA implementation

    Bibtex Reference:

    @article{springerlink:10.1007/s11554-013-0393-7, 
    	title={HDR-ARtiSt: an adaptive real-time smart camera for high dynamic range imaging},
    	author={Lapray, Pierre-Jean and Heyrman, Barthelemy and Ginhac, Dominique},
    	journal={Journal of Real-Time Image Processing},
    	publisher={Springer Berlin Heidelberg},
    	issn={1861-8200},
    	doi={10.1007/s11554-013-0393-7},
    	url={http://dx.doi.org/10.1007/s11554-013-0393-7},
    	keywords={Smart camera; High dynamic range, memory management core; Parallel processing; FPGA implementation},
    	pages={1-16},
    	year={2014}
    	}		
    	

  8. T. Toczek, F. Hamdi, B. Heyrman, J. Dubois, J. Miteran, and D. Ginhac, "Scene-based non-uniformity correction: from algorithm to implementation on a smart camera", Journal of Systems Architecture, 59 (10), pp. 833-846, 2013.
    pdf (2.3 MB) Original paper on ScienceDirect

    Abstract: Raw output data from image sensors tends to exhibit a form of bias due to slight on-die variations between photodetectors, as well as between amplifiers. The resulting bias, called fixed pattern noise (FPN), is often corrected by subtracting its value, estimated through calibration, from the sensor's raw signal. This paper introduces an on-line scene-based technique for improved fixed-pattern noise compensation which does not rely on calibration, and hence is more robust to the dynamic changes in the FPN which may occur slowly over time. This article first gives a quick summary of existing FPN correction methods and explains how our approach relates to them. Three different pipeline architectures for real-time implementation on an FPGA-based smart camera are then discussed. For each of them, FPGA implementation details, performance and hardware costs are provided. Experimental results on a set of seven different scenes are also presented, showing that the proposed correction chain induces little additional resource use while guaranteeing high quality images on a wide variety of scenes.

    Keywords: Fixed spatial noise, Non-uniformity correction, FPGA-based smart camera, Real-time implementation

    Bibtex Reference:

    @article{ScienceDirect:j.sysarc.2013.05.017,
    	title = "Scene-based non-uniformity correction: From algorithm to implementation on a smart camera",
    	author = "T. Toczek and F. Hamdi and B. Heyrman and J. Dubois and J. Miteran and D. Ginhac",
    	journal = "Journal of Systems Architecture",
    	issn = "1383-7621",
    	volume = "59",
    	number = "10, Part A",
    	pages = "833 - 846",
    	year = "2013",
    	doi = "http://dx.doi.org/10.1016/j.sysarc.2013.05.017",
    	url = "http://www.sciencedirect.com/science/article/pii/S1383762113000982"
    }
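    The calibration-free principle behind scene-based FPN correction can be shown in a few lines of Python. This is a deliberately minimal illustration, not the paper's algorithm (which is pipelined for FPGA execution): assuming the scene content averages out over time, a slowly updated per-pixel deviation from the frame mean converges to the fixed-pattern offset. The names `update_fpn` and `correct` are hypothetical.

    ```python
    def update_fpn(offsets, frame, alpha=0.05):
        """One scene-based update step: track each pixel's deviation from the
        frame mean with an exponential moving average; with enough varied (or
        flat) scene content, this converges to the fixed-pattern offset."""
        mean = sum(frame) / len(frame)
        return [(1 - alpha) * o + alpha * (p - mean)
                for o, p in zip(offsets, frame)]

    def correct(frame, offsets):
        """Subtract the current offset estimate from a raw frame."""
        return [p - o for p, o in zip(frame, offsets)]

    # Two pixels seeing a flat scene of 100, with true offsets +5 and -5:
    offsets = [0.0, 0.0]
    for _ in range(200):
        offsets = update_fpn(offsets, [105, 95])
    corrected = correct([105, 95], offsets)   # ≈ [100.0, 100.0]
    ```

    Because the estimate is updated continuously from live frames, it also tracks the slow drift in FPN that a one-time calibration frame would miss, which is the robustness argument the abstract makes.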
    				

  9. W. Elhamzi, J. Dubois, J. Miteran, M. Atri, B. Heyrman, and D. Ginhac, "Efficient smart-camera accelerator: a configurable motion estimator dedicated to video codec", Journal of Systems Architecture, 59 (10), pp. 870-877, 2013.
    pdf (1 MB) Original paper on ScienceDirect

    Abstract: Smart cameras are used in a wide range of applications. Usually, smart cameras transmit the video and/or information extracted from the video scene, frequently in compressed format to fit the application requirements. An efficient hardware accelerator that can be adapted and provide the required coding performance according to the events detected in the video, the available network bandwidth or user requirements is therefore a key element for smart camera solutions. In this paper we focus on a key part of the compression system: motion estimation. We have developed a flexible hardware implementation of the motion estimator based on an FPGA, fully compatible with H.264, which enables the integer motion search, the fractional search and the variable block size to be selected and adjusted. The main contributions of this paper are the definition of an architecture allowing flexibility, and new hardware optimizations of the motion estimation architecture improving performance (computing time or hardware resources) compared to the state of the art. The paper describes the design and proposes a comparison with state-of-the-art architectures. The resulting FPGA-based architecture can process integer motion estimation on 720×576 video streams at 67 fps using a full search strategy, and sub-pel refinement up to 650 KMacroblocks/s.

    Keywords: Configurable motion estimation, Smart camera accelerator, Fractional Motion Estimation, FPGA

    Bibtex Reference:

    @article{ScienceDirect:j.sysarc.2013.05.005,
    	title = "Efficient smart-camera accelerator: A configurable motion estimator dedicated to video codec",
    	author = "Wajdi Elhamzi and Julien Dubois and Johel Miteran and Mohamed Atri and Barthelemy Heyrman and Dominique Ginhac",
    	journal = "Journal of Systems Architecture",
    	volume = "59",
    	number = "10, Part A",
    	pages = "870 - 877",
    	year = "2013",
    	issn = "1383-7621",
    	doi = "http://dx.doi.org/10.1016/j.sysarc.2013.05.005",
    	url = "http://www.sciencedirect.com/science/article/pii/S1383762113000726"
    }
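    The full-search integer motion estimation mentioned in the abstract can be sketched in Python. This is a software reference of the standard SAD block-matching algorithm, not the paper's FPGA architecture; `sad` and `full_search` are illustrative names, and the tiny synthetic frames below are an assumption for demonstration.

    ```python
    def sad(block_a, block_b):
        """Sum of absolute differences between two equally sized blocks."""
        return sum(abs(a - b)
                   for ra, rb in zip(block_a, block_b)
                   for a, b in zip(ra, rb))

    def full_search(ref, cur, bx, by, bsize, srange):
        """Exhaustive integer-pel search: evaluate every candidate displacement
        in [-srange, srange]^2 and keep the one minimizing SAD."""
        h, w = len(ref), len(ref[0])
        cur_block = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
        best, best_cost = (0, 0), float("inf")
        for dy in range(-srange, srange + 1):
            for dx in range(-srange, srange + 1):
                x, y = bx + dx, by + dy
                if x < 0 or y < 0 or x + bsize > w or y + bsize > h:
                    continue  # candidate block falls outside the frame
                cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
                cost = sad(cur_block, cand)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
        return best, best_cost

    # Synthetic frames: `cur` is `ref` shifted left by one pixel, so the block
    # at (2, 2) in `cur` is found one pixel to the right in `ref`.
    ref = [[(x * 7 + y * 13) % 256 for x in range(8)] for y in range(8)]
    cur = [[ref[y][x + 1] if x + 1 < 8 else 0 for x in range(8)] for y in range(8)]
    mv, cost = full_search(ref, cur, 2, 2, 2, 2)   # ((1, 0), 0)
    ```

    Full search is exhaustive, which is why it maps well to a highly parallel hardware datapath but is expensive in software; the configurability the paper describes lets the search strategy and block size be traded against throughput.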
    				

  10. K. Jradi, D. Pellion, and D. Ginhac, "Multi-pixel Geiger mode imager for medical application", Proceedings of Science – PoS(PhotoDet2012), 055, pp. 1-5, 2013.
    pdf (1 MB) Original paper on Hal

    Abstract: Nowadays, there are two types of sensors to detect low luminous flux: the PMT (Photomultiplier Tube) and the Geiger-APD (Geiger Avalanche Photodiode). The domain of Geiger-APDs has seen advanced development in recent years. The basic idea of this structure consists in polarizing an APD in Geiger mode by applying a voltage beyond its breakdown voltage. In this mode of polarization, the APD operates in a special regime and is able to detect single photons. The theory of single-photon detection using this detector was developed in the early 1990s for the detection of low light intensities. By using this kind of photodiode in Geiger mode, we designed a new type of detector, and several applications have been explored. In astrophysics, this detector can be used for the detection of cosmic rays (through the detection of Cherenkov light generated by atmospheric showers). Another application is also possible and provides an important method for the detection of cancer cells.

    Keywords: Photon counting, SPAD, CMOS

    Bibtex Reference:

    @article{Hal:jradi:hal-00806694,
    	title = {{Multi-pixel Geiger mode imager for medical application}},
    	author = {Jradi, Khalil and Pellion, Denis and Ginhac, Dominique},
    	affiliation = {Laboratoire Electronique, Informatique et Image - Le2i},
    	journal = {PoS - Proceedings of Science},
    	pages = {1-5},
    	url = {http://hal.archives-ouvertes.fr/hal-00806694},
    	year = {2013},
    }
    				

  11. P. Tahej, C. Ferrel-Chapus, I. Olivier, D. Ginhac, and J.-P. Rolland, “Multiple representations and mechanisms for visuomotor adaptation in young children”, Human Movement Science, 31(6), pp. 1425-1435, 2012.
    pdf (757 KB) Original paper on ScienceDirect

    Abstract: In this study, we utilized transformed spatial mappings to perturb visuomotor integration in 5-yr-old children and adults. The participants were asked to perform pointing movements under five different conditions of visuomotor rotation (from 0° to 180°), which were designed to reveal explicit vs. implicit representations as well as the mechanisms underlying the visual-motor mapping. Several tests allowed us to separately evaluate sensorimotor (i.e., the dynamic dimension of movement) and cognitive (i.e., the explicit representations of target position and the strategies used by the participants) representations of visuo-proprioceptive distortion. Our results indicate that children do not establish representations in the same manner as adults and that children exhibit multiple visuomotor representations. Sensorimotor representations were relatively precise, presumably due to the recovery of proprioceptive information and efferent copy. Furthermore, a bidirectional mechanism was used to re-map visual and motor spaces. In contrast, cognitive representations were supplied with visual information and followed a unidirectional visual-motor mapping. Therefore, it appears that sensorimotor mechanisms develop before the use of explicit strategies during development, and young children showed impaired visuomotor adaptation when confronted with large distortions.

    Keywords: Cognitive & perceptual development, Visuomotor adaptation, Motor development, Sensorimotor, Proprioception

    Bibtex Reference:

    @article{ScienceDirect:10.1016/j.humov.2012.02.016,
    	title = "Multiple representations and mechanisms for visuomotor adaptation in young children",
    	journal = "Human Movement Science",
    	author = "Pierre-Karim Tahej and Carole Ferrel-Chapus and Isabelle Olivier and Dominique Ginhac and Jean-Pierre Rolland",
    	volume = "31",
    	number = "6",
    	pages = "1425 - 1435",
    	year = "2012",
    	issn = "0167-9457",
    	doi = "http://dx.doi.org/10.1016/j.humov.2012.02.016",
    	url = "http://www.sciencedirect.com/science/article/pii/S016794571200053X"
    }
    				

  12. S. Chambaron, B. Berbérian, D. Ginhac, L. Delbecque and A. Cleeremans, "Action, observation and mental imagery: Can one implicitly learn in all cases?", Année Psychologique, 110 (3), pp. 351-364, 2010.
    pdf (2.9 MB) Original paper on NecPlus

    Abstract: This experiment aims to study whether implicit learning can take place when participants practice, observe or mentally imagine regular displacements of a target in a Serial Reaction Time (SRT) task. The results indicate that participants who actually practiced the task, participants who performed motor imagery, and participants who observed all learned the sequence, contrary to the participants who only performed visual imagery. Indeed, the time on target of the former participants decreases significantly when a new sequence is introduced, which is not the case for the participants in the visual imagery condition. However, the results obtained with the recognition test do not enable us to conclude definitively about the nature of the learning (implicit vs. explicit). Finally, this study highlights that the motor imagery condition yields performance similar to that obtained in the practice and observation conditions, which represents an interesting contribution to the fields of sport psychology and cognitive psychology.

    Bibtex Reference:

    @article{CambridgeJournals:2431600,
    	author = {Chambaron, Stéphanie and Berberian, Bruno and Ginhac, Dominique and Delbecque, Laure and Cleeremans, Axel},
    	title = {Action, observation and mental imagery: Can one implicitly learn in all cases?},
    	journal = {L'Année psychologique},
    	volume = {110},
    	number = {03},
    	pages = {351-364},
    	year = {2010},
    	doi = {10.4074/S0003503310003027},
    	URL = {http://dx.doi.org/10.4074/S0003503310003027},
    }
    				

  13. D. Ginhac, J. Dubois, M. Paindavoine, and B. Heyrman, "A high speed programmable focal-plane SIMD vision chip", Analog Integrated Circuits and Signal Processing – Special Issue on Traitement Analogique de l'Information, du Signal et ses Applications TAISA, 65 (3), pp. 389-398, 2010.
    pdf (769 KB) Original paper on SpringerLink

    Abstract: A high-speed analog VLSI image acquisition and low-level image processing system is presented. The architecture of the chip is based on a dynamically reconfigurable SIMD processor array. The chip features a massively parallel architecture enabling the computation of programmable mask-based image processing in each pixel. Each pixel includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit based on a four-quadrant multiplier architecture. A 64x64 pixel proof-of-concept chip was fabricated in a 0.35 µm standard CMOS process, with a pixel size of 35 µm x 35 µm. The chip can capture raw images at up to 10,000 frames per second and runs low-level image processing at a framerate of 2000 to 5000 frames per second.

    Keywords: CMOS Image Sensor, Parallel architecture, SIMD, High-speed image processing, Analog arithmetic unit.

    Bibtex Reference:

    @article {springerlink:10.1007/s10470-009-9325-7,
    	author = {Ginhac, Dominique and Dubois, Jérôme and Heyrman, Barthélémy and Paindavoine, Michel},
    	affiliation = {LE2I—Université de Bourgogne, Aile des Sciences de l’Ingénieur, BP 47870, 21078 Dijon Cedex, France},
    	title = {A high speed programmable focal-plane SIMD vision chip},
    	journal = {Analog Integrated Circuits and Signal Processing},
    	publisher = {Springer Netherlands},
    	issn = {0925-1030},
    	keyword = {Engineering},
    	pages = {389-398},
    	volume = {65},
    	issue = {3},
    	url = {http://dx.doi.org/10.1007/s10470-009-9325-7},
    	note = {10.1007/s10470-009-9325-7},
    	year = {2010}}
    				

  14. D. Ginhac, J. Dubois, M. Paindavoine, and B. Heyrman, "A SIMD Programmable Vision Chip with High-Speed Focal Plane Image Processing", EURASIP Journal on Embedded Systems – Special Issue on Design and Architectures for Signal and Image Processing, Article ID 961315, 13 pages, Jan 2009.
    pdf (3.3 MB) Original paper on SpringerOpen

    Abstract: A high-speed analog VLSI image acquisition and low-level image processing system is presented. The architecture of the chip is based on a dynamically reconfigurable SIMD processor array. The chip features a massively parallel architecture enabling the computation of programmable mask-based image processing in each pixel. Extraction of spatial gradients and convolutions such as Sobel operators are implemented on the circuit. Each pixel includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit based on a four-quadrant multiplier architecture. A 64 × 64 pixel proof-of-concept chip was fabricated in a 0.35 µm standard CMOS process, with a pixel size of 35 µm × 35 µm. A dedicated embedded platform including FPGA and ADCs has also been designed to evaluate the vision chip. The chip can capture raw images at up to 10000 frames per second and runs low-level image processing at a frame rate of 2000 to 5000 frames per second.

    Bibtex Reference:

    @Article{eurasip:10.1155/2008/961315,
    	AUTHOR = {Ginhac, Dominique and Dubois, Jerome and Paindavoine, Michel and Heyrman, Barthelemy},
    	TITLE = {An SIMD Programmable Vision Chip with High-Speed Focal Plane Image Processing},
    	JOURNAL = {EURASIP Journal on Embedded Systems},
    	VOLUME = {2008},
    	YEAR = {2009},
    	NUMBER = {1},
    	PAGES = {961315},
    	URL = {http://jes.eurasipjournals.com/content/2008/961315},
    	DOI = {10.1155/2008/961315},
    	ISSN = {1687-3963},}
    				

  15. J. Dubois, D. Ginhac, M. Paindavoine, and B. Heyrman, "A 10 000 fps CMOS Sensor With Massively Parallel Image Processing", IEEE Journal of Solid-State Circuits, 43(3), 706-717, 2008.
    pdf (4.4 MB) Original paper on IEEE

    Abstract: A high-speed analog VLSI image acquisition and pre-processing system has been designed and fabricated in a 0.35 µm standard CMOS process. The chip features a massively parallel architecture enabling the computation of programmable low-level image processing in each pixel. Extraction of spatial gradients and convolutions such as Sobel or Laplacian filters are implemented on the circuit. For this purpose, each 35 µm x 35 µm pixel includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit based on a four-quadrant multiplier architecture. The retina provides address-event coded output on three asynchronous buses: one output dedicated to the gradient and the other two to the pixel values. A 64 x 64 pixel proof-of-concept chip was fabricated. A dedicated embedded platform including FPGA and ADCs has also been designed to evaluate the vision chip. Measured results show that the proposed sensor successfully captures raw images up to 10000 frames per second and runs low-level image processing at a frame rate of 2000 to 5000 frames per second.

    Keywords: CMOS image sensor, parallel architecture, high-speed image processing, analog arithmetic unit.

    Bibtex Reference:

    @article{ieee-10.1109/JSSC.2007.916618, 
    	author={Dubois, J. and Ginhac, D. and Paindavoine, M. and Heyrman, B.}, 
    	journal={Solid-State Circuits, IEEE Journal of}, 
    	title={A 10 000 fps CMOS Sensor With Massively Parallel Image Processing}, 
    	year={2008}, 
    	month={march }, 
    	volume={43}, 
    	number={3}, 
    	pages={706 -717}, 
    	keywords={ADC;CMOS sensor;FPGA;Laplacian filters;Sobel filters;address-event coded output;analog arithmetic unit;asynchronous buses;convolution;four-quadrant multiplier architecture;high-speed analog VLSI image acquisition;massively parallel image processing;photodiode;picture size 64 pixel;programmable low-level image processing;proof-of-concept chip;size 0.35 mum;spatial gradients extraction;storage capacitors;vision chip evaluation;CMOS image sensors;VLSI;analogue-digital conversion;capacitor storage;field programmable gate arrays;image processing;parallel architectures;photodiodes;}, 
    	doi={10.1109/JSSC.2007.916618}, 
    	ISSN={0018-9200},}
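The programmable mask-based processing these sensor papers describe can be pictured with a short sequential model. The sketch below is only an illustration of the principle (the chip performs the multiply-accumulates in analog hardware, in every pixel at once); the Sobel horizontal mask and the toy image are example values of my own, not data from the papers.

```python
# Sequential model of per-pixel 3x3 mask processing (illustrative only;
# on the vision chip, one analog arithmetic unit per pixel computes this
# weighted sum in parallel).
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]  # horizontal-gradient mask, as mentioned in the abstracts

def apply_mask(image, mask):
    """Apply a 3x3 mask at every interior pixel (borders left at 0)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in range(3):
                for dx in range(3):
                    acc += mask[dy][dx] * image[y + dy - 1][x + dx - 1]
            out[y][x] = acc
    return out

# Toy 4x4 frame with a vertical edge: dark left half, bright right half.
img = [[0, 0, 10, 10] for _ in range(4)]
grad = apply_mask(img, SOBEL_X)  # strong response along the edge
```

Each output value depends only on a pixel's own 3x3 neighborhood, which is why this kind of operation maps naturally onto a massively parallel SIMD pixel array.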
    				

  16. S. Morfu, P. Marquié, B. Nofiélé, and D. Ginhac. "Nonlinear Systems for Image Processing." Advances in Imaging and Electron Physics, Elsevier, 152: 79-151, 2008.
    pdf (3.1 MB) Original paper on Science Direct

    Abstract: Not available.

    Bibtex Reference:

    @incollection{ISI:000260166100003,
    	title = "Chapter 3 Nonlinear Systems for Image Processing",
    	editor = "Peter W. Hawkes",
    	booktitle = "Advances in IMAGING AND ELECTRON PHYSICS",
    	publisher = "Elsevier",
    	year = "2008",
    	volume = "152",
    	pages = "79 - 151",
    	series = "Advances in Imaging and Electron Physics",
    	issn = "1076-5670",
    	doi = "10.1016/S1076-5670(08)00603-4",
    	url = "http://www.sciencedirect.com/science/article/pii/S1076567008006034",
    	author = "Saverio Morfu and Patrick Marquie and Brice Nofiele and Dominique Ginhac"
    	}
    
    				

  17. S. Chambaron, D. Ginhac, and P. Perruchet. "gSRT-Soft: A generic software application and some methodological guidelines to investigate implicit learning through visual-motor sequential tasks". Behavior Research Methods, 40(2): 493-502, 2008
    pdf (812 KB) Original paper on SpringerLink

    Abstract: Serial reaction time tasks and, more generally, the visual–motor sequential paradigms are increasingly popular tools in a variety of research domains, from studies on implicit learning in laboratory contexts to the assessment of residual learning capabilities of patients in clinical settings. A consequence of this success, however, is the increased variability in paradigms and the difficulty inherent in respecting the methodological principles that two decades of experimental investigations have made more and more stringent. The purpose of the present article is to address those problems. We present a user-friendly application that simplifies running classical experiments, but is flexible enough to permit a broad range of nonstandard manipulations for more specific objectives. Basic methodological guidelines are also provided, as are suggestions for using the software to explore unconventional directions of research. The most recent version of gSRT-Soft may be obtained for free by contacting the authors.

    Bibtex Reference:

    @article {springerlink:10.3758/BRM.40.2.493,
       author = {Chambaron, Stéphanie and Ginhac, Dominique and Perruchet, Pierre},
       affiliation = {Cognitive Science Research Unit, Université Libre de Bruxelles, Av. F. D. Roosevelt, 50, CP 191, 1050 Brussels, Belgium},
       title = {gSRT-Soft: A generic software application and some methodological guidelines to investigate implicit learning through visual-motor sequential tasks},
       journal = {Behavior Research Methods},
       publisher = {Springer New York},
       issn = {1554-351X},
       keyword = {Psychology},
       pages = {493-502},
       volume = {40},
       issue = {2},
       url = {http://dx.doi.org/10.3758/BRM.40.2.493},
       note = {10.3758/BRM.40.2.493},
       year = {2008}
    }
    				

  18. S. Chambaron, D. Ginhac, and P. Perruchet., "Is Learning in SRT Tasks Robust Across Methodological Variations?", Année Psychologique, 108(3), 465-486, 2008
    pdf (745 KB) Original paper on NecPlus

    Abstract: In a previous study (Chambaron, Ginhac, Ferrel-Chapus & Perruchet, 2006), we showed that it is very difficult to benefit from repetition in a continuous tracking task. Such results contrast with the apparent ease with which such learning can be obtained in SRT tasks. How can this discrepancy be explained? Is learning in SRT tasks dependent on the specific design used? We modified a traditional SRT task in order to make this discrete task as similar as possible to a continuous task: 1) by mixing a repeated sequence among random sequences, 2) by using a computer mouse, 3) by adding a precision constraint, and 4) by making the displacement of the target autonomous and continuous. The goal was to investigate whether implicit learning continued to appear with such modifications. The results show that implicit learning persists despite these major procedural variations. Our experiments contribute new procedures and open the way to a large array of future manipulations.

    Bibtex Reference:

    @article{CambridgeJournals:2423284,
    	author = {Chambaron, Stéphanie and Ginhac, Dominique and Perruchet, Pierre},
    	title = {Is Learning in SRT Tasks Robust Across Methodological Variations?},
    	journal = {L'Année psychologique},
    	volume = {108},
    	number = {03},
    	pages = {465-486},
    	year = {2008},
    	doi = {10.4074/S0003503308003035},
    	URL = {http://dx.doi.org/10.4074/S0003503308003035},
    }
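The manipulation described above, a repeated sequence mixed in among random sequences, can be sketched in a few lines. The sketch is purely illustrative: it is not gSRT-Soft and not the authors' stimulus code, and the sequence values and block layout are invented for the example.

```python
import random

# Four possible target locations (1-4); a fixed repeated sequence is
# embedded between freshly generated random sequences in each block.
REPEATED = [3, 1, 4, 2, 3, 2, 1, 4, 1, 2]  # invented example sequence

def make_block(repeated, rng):
    """One block of trials: random sequence, repeated sequence, random sequence."""
    rand = lambda: [rng.randrange(1, 5) for _ in repeated]
    return rand() + list(repeated) + rand()

block = make_block(REPEATED, random.Random(0))  # seeded for reproducibility
```

Learning of the repeated sequence is then measured as the performance difference between the repeated segment and the surrounding random segments.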
    				

  19. S. Chambaron, D. Destrebecqz, D. Ginhac, and A. Cleeremans. "Influence of Response-Stimulus Interval (RSI) on sequence learning." International Journal of Psychology, 43(3-4), 258, 2008
    Not available Original paper on Taylor Francis

    Abstract: We investigated the role of response-stimulus intervals (RSI) in sequence learning in serial reaction time tasks. We assumed that random RSIs would disturb chunk formation and have detrimental effects on learning. In two experiments, we compared sequence learning using random and constant RSIs. Moreover, the RSI average values could be either short (Exp. 1) or long (Exp. 2). Our results reveal that (1) random RSIs had no impact on SRT performance; (2) recognition of sequence fragments was only observed with constant RSIs. It is therefore argued that sequence temporal organization is mandatory for explicit sequence learning to take place.

    Bibtex Reference:

    @article{ ISI:000259264303031,
    	Author = {Chambaron, Stephanie and Destrebecqz, Arnaud and Ginhac, Dominique and Cleeremans, Axel},
    	Title = {{Influence of response-stimulus interval (RSI) on sequence learning}},
    	Journal = {{INTERNATIONAL JOURNAL OF PSYCHOLOGY}},
    	Year = {{2008}},
    	Volume = {{43}},
    	Number = {{3-4}},
    	Pages = {{258}},
    	Month = {{JUN-AUG}},
    	ISSN = {{0020-7594}},
    	Unique-ID = {{ISI:000259264303031}},
    	url={http://dx.doi.org/10.1080/00207594.2008.10108484},
    	doi={10.1080/00207594.2008.10108484},
    	}
    				

  20. S. Chambaron, D. Ginhac, C. Ferrel-Chapus, and P. Perruchet. "Implicit learning of a Repeated Segment in Continuous Tracking: A Reappraisal". The Quarterly Journal of Experimental Psychology, 59A: 845-854, 2006
    pdf (394 KB) Original paper on Taylor Francis

    Abstract: Several prior studies (e.g., Shea, Wulf, Whitacre, & Park, 2001; Wulf & Schmidt, 1997) have apparently demonstrated implicit learning of a repeated segment in continuous-tracking tasks. In two conceptual replications of these studies, we failed to reproduce the original findings. However, these findings were reproduced in a third experiment, in which we used the same repeated segment as that used in the Wulf et al. studies. Analyses of the velocity and the acceleration of the target suggest that this repeated segment could be easier to track than the random segments serving as control, accounting for the results of Wulf and collaborators. Overall, these experiments suggest that learning a repeated segment in continuous-tracking tasks may be much more difficult than learning a repeated sequence in conventional serial reaction time tasks. A possible explanation for this difference is outlined.

    Bibtex Reference:

    @article{doi:10.1080/17470210500198585,
    	author = {Chambaron, Stephanie and Ginhac, Dominique and Ferrel-Chapus, Carole and Perruchet, Pierre},
    	title = {Implicit learning of a repeated segment in continuous tracking: A reappraisal},
    	journal = {The Quarterly Journal of Experimental Psychology},
    	volume = {59},
    	number = {05},
    	pages = {845-854},
    	year = {2006},
    	doi = {10.1080/17470210500198585},
    	URL = {http://www.tandfonline.com/doi/abs/10.1080/17470210500198585},
    	eprint = {http://www.tandfonline.com/doi/pdf/10.1080/17470210500198585}
    }
    				

  22. F. Yang, M. Paindavoine, D. Ginhac, and J. Dubois. "Développement d'un système rapide pour le mosaïquage et la reconnaissance de visages panoramiques". Traitement du Signal, 22 (5) : 549-562, 2005.
    pdf (576 KB) Original paper on I-Revues

    Abstract: In this article, we present some development results of a system that performs mosaicing of panoramic faces. Our objective is to study the feasibility of panoramic face construction in real-time. This led us to conceive a very simple acquisition system composed of 5 standard cameras, with 5 face views taken simultaneously at different angles. Then, we chose an easily hardware-achievable algorithm: successive linear transformations, in order to compose a panoramic face of 150° from these 5 views. The method has been tested on hundreds of faces. In order to validate our system of panoramic face mosaicing, we also conducted a preliminary study on panoramic face recognition, based on the «eigenfaces» method. Experimental results obtained show the feasibility and viability of our system. This allows us to envisage a later hardware implementation. We are also considering applying our system to other applications such as human expression categorization using movement estimation and fast 3D face reconstruction.

    Keywords: Panoramic vision, image mosaicing, face recognition, principal Component Analysis, FFT.

    Bibtex Reference:

    @article{ts2005,
    	AUTHOR = {Yang, Fan and Paindavoine, Michel and Ginhac, Dominique and Dubois, Julien},
    	TITLE = {Développement d'un système rapide pour le mosaïquage et la reconnaissance de visages panoramiques},
    	JOURNAL = {Traitement du Signal},
    	VOLUME = {22},
    	YEAR = {2005},
    	NUMBER = {5},
    	PAGES = {549-562},
    	URL = {http://hdl.handle.net/2042/4383},
    }
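The "successive linear transformation" step can be pictured as mapping each side view's pixel coordinates into the mosaic's coordinate frame. The sketch below is a toy model only; the matrix and offset values are invented for illustration and are not the paper's calibration.

```python
# Toy model: a linear transformation plus an offset maps a view's pixel
# coordinates into the panoramic mosaic's coordinate frame.
def transform(point, matrix, offset):
    """Map (x, y) through a 2x2 linear transform and a translation."""
    (x, y) = point
    (a, b), (c, d) = matrix
    ox, oy = offset
    return (a * x + b * y + ox, c * x + d * y + oy)

# Illustrative mapping for a side view: compress horizontally by cos(30°)
# (about 0.866) and shift 200 pixels into the mosaic frame.
p = transform((100, 50), ((0.866, 0.0), (0.0, 1.0)), (200.0, 0.0))
```

Because the mapping is linear, it reduces to a handful of multiply-accumulates per pixel, which is what makes the method "easily hardware-achievable".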
    				

  23. J. Sérot and D. Ginhac, "Skeletons for parallel image processing: an overview of the SKiPPER project", Parallel Computing, 28(12), 1685-1708, 2002.
    pdf (688 KB) Original paper on Science Direct

    Abstract: This paper is a general overview of the SKIPPER project, run at Blaise Pascal University between 1996 and 2002. The main goal of the SKIPPER project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This project has produced several versions of a full-fledged integrated parallel programming environment (PPE). These PPEs have been used to implement realistic vision applications, such as road following or vehicle tracking for assisted driving, on embedded parallel platforms embarked on semi-autonomous vehicles. All versions of SKIPPER share a common front-end and repertoire of skeletons (presented in previous papers) but differ in the techniques used for implementing skeletons. This paper focuses on these implementation issues, by making a comparative survey, according to a set of four criteria (efficiency, expressivity, portability, predictability), of these implementation techniques. It also gives an account of the lessons we have learned, both when dealing with these implementation issues and when using the resulting tools for prototyping vision applications.

    Keywords: Parallelism, Skeleton, Computer vision, Fast prototyping, Data-flow.

    Bibtex Reference:

    @article{sciencedirect:10.1016/S0167-8191(02)00189-8,
    	title = "Skeletons for parallel image processing: an overview of the SKIPPER project",
    	journal = "Parallel Computing",
    	volume = "28",
    	number = "12",
    	pages = "1685 - 1708",
    	year = "2002",
    	note = "",
    	issn = "0167-8191",
    	doi = "10.1016/S0167-8191(02)00189-8",
    	url = "http://www.sciencedirect.com/science/article/pii/S0167819102001898",
    	author = "Jocelyn Sérot and Dominique Ginhac",
    	keywords = "Parallelism, Skeleton, Computer vision, Fast prototyping, Data-flow",
    }
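The skeleton idea that runs through these papers can be modeled with ordinary higher-order functions. The sketch below is a hypothetical sequential model, not SKiPPER's actual API; skeleton names and signatures are illustrative. Its point is the one the abstracts make: the user composes skeleton instances and supplies sequential worker functions, and the applicative semantics of the sequential model match what a parallel implementation would compute.

```python
# Sequential models of two classic skeletons (illustrative names only).
def farm(worker, items):
    """Data/task-farming skeleton: apply `worker` to each item.
    A parallel implementation would dispatch items to worker processes;
    the result is the same by construction."""
    return [worker(x) for x in items]

def geometric_decomposition(split, worker, merge, data):
    """Split the data into regions, process each region, merge the results."""
    return merge([worker(part) for part in split(data)])

# Usage: thresholding an "image" (list of rows), one region per row.
rows = [[1, 9, 3], [7, 2, 8]]
result = geometric_decomposition(
    split=lambda img: img,                        # one region per row
    worker=lambda row: [int(v > 4) for v in row], # sequential user function
    merge=lambda parts: parts,                    # reassemble the rows
    data=rows)
```

This is why the same skeletal specification can be run on a sequential platform to check correctness before it is mapped onto a parallel target.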
    				

  24. J. Sérot, D. Ginhac, R. Chapuis, and J.P. Dérutin, "Fast prototyping of parallel vision applications using functional skeletons", Journal of Machine Vision and Applications, 12(6), 271-290, 2001.
    pdf (390 KB) Original paper on SpringerLink

    Abstract: We present a design methodology for real-time vision applications aiming at significantly reducing the design-implement-validate cycle time on dedicated parallel platforms. This methodology is based upon the concept of algorithmic skeletons, i.e., higher order program constructs encapsulating recurring forms of parallel computations and hiding their low-level implementation details. Parallel programs are built by simply selecting and composing instances of skeletons chosen in a predefined basis. A complete parallel programming environment was built to support the presented methodology. It comprises a library of vision-specific skeletons and a chain of tools capable of turning an architecture-independent skeletal specification of an application into an optimized, deadlock-free distributed executive for a wide range of parallel platforms. This skeleton basis was defined after a careful analysis of a large corpus of existing parallel vision applications. The source program is a purely functional specification of the algorithm in which the structure of a parallel application is expressed only as a combination of a limited number of skeletons. This specification is compiled down to a parametric process graph, which is subsequently mapped onto the actual physical topology using a third-party CAD software. It can also be executed on any sequential platform to check the correctness of the parallel algorithm. The applicability of the proposed methodology and associated tools has been demonstrated by parallelizing several realistic real-time vision applications both on a multi-processor platform and a network of workstations. It is here illustrated with a complete road-tracking algorithm based upon white-line detection. This experiment showed a dramatic reduction in development times (hence the term fast prototyping), while keeping performance on par with that obtained with the handcrafted parallel version.

    Keywords: Parallelism, Computer vision, Fast prototyping, Skeleton, Functional programming, CAML, Road following.

    Bibtex Reference:

    @article {springerlink:10.1007/s001380050146,
    	author = {Sérot, Jocelyn and Ginhac, Dominique and Chapuis, Roland and Dérutin, Jean-Pierre},
    	affiliation = {Laboratoire des Sciences et Matériaux pour l'Electronique, et d'Automatique, Université Blaise Pascal de Clermont Ferrand, UMR 6602 CNRS, 63177 Aubière Cedex France; e-mail: Jocelyn.Serot@lasmea.univ-bpclermont.fr FR FR},
    	title = {Fast prototyping of parallel-vision applications using functional skeletons},
    	journal = {Machine Vision and Applications},
    	publisher = {Springer Berlin / Heidelberg},
    	issn = {0932-8092},
    	keyword = {Computer Science},
    	pages = {271-290},
    	volume = {12},
    	issue = {6},
    	url = {http://dx.doi.org/10.1007/s001380050146},
    	note = {10.1007/s001380050146},
    	year = {2001}
    }
    				

  25. J. Sérot, D. Ginhac, and J.P. Dérutin. "SKiPPER: a skeleton-based parallel programming environment for real-time image processing applications". Parallel Computing Technologies, Lecture Notes in Computer Science 1662: 296-305. Springer, 1999.
    pdf (212 KB) Original paper on SpringerLink

    Abstract: This paper presents SKiPPER, a programming environment dedicated to the fast prototyping of parallel vision algorithms on MIMD-DM platforms. SKiPPER is based upon the concept of algorithmic skeletons, i.e. higher order program constructs encapsulating recurring forms of parallel computations and hiding their low-level implementation details. Each skeleton is given an architecture-independent functional (but executable) specification and a portable implementation as a generic process template. The source program is a purely functional specification of the algorithm in which all parallelism is made explicit by means of composing instances of selected skeletons, each instance taking as parameters the application specific sequential functions written in C. SKiPPER compiles this specification down to a process graph in which nodes correspond to sequential functions and/or skeleton control processes and edges to communications. This graph is then mapped onto the target topology using a third-party CAD software (SynDEx). The result is a dead-lock free, optimized (but still portable) distributed executive, which SKiPPER finally turns into executable code for the target platform. The initial specification, written in ML language, can also be executed on any sequential platform to check the correctness of the parallel algorithm. The applicability of SKiPPER concepts and tools has been demonstrated by parallelising several realistic real-time vision applications both on a multi-DSP platform and a network of workstations. It is here illustrated with a real-time vehicle detection and tracking application.

    Keywords: Parallelism, skeleton, Caml, image processing, fast prototyping, vehicle tracking.

    Bibtex Reference:

    @incollection {springerlink:10.1007/3-540-48387-X_31,
       author = {Sérot, Jocelyn and Ginhac, Dominique and Dérutin, Jean-Pierre},
       affiliation = {LASMEA UMR 6602-CNRS Campus des Cézeaux F-63177 Aubiére Cedex France},
       title = {SKiPPER: A Skeleton-Based Parallel Programming Environment for Real-Time Image Processing Applications},
       booktitle = {Parallel Computing Technologies},
       series = {Lecture Notes in Computer Science},
       editor = {Malyshkin, Victor},
       publisher = {Springer Berlin / Heidelberg},
       isbn = {978-3-540-66363-8},
       keyword = {Computer Science},
   pages = {296-305},
       volume = {1662},
       url = {http://dx.doi.org/10.1007/3-540-48387-X_31},
       note = {10.1007/3-540-48387-X_31},
       year = {1999}
    }
    				

  26. D. Ginhac, J. Sérot, and J.P. Dérutin. "Evaluation de l'outil SynDEx en vue de prototypage rapide d'applications de traitement d'images sur machine MIMD-DM". Traitement du Signal, 14 (6) : 605-613, 1997.
    pdf (904 KB) Original paper on I-Revues

    Abstract: The goal of this paper is to evaluate the SynDEx system-level CAD tool in order to estimate its usefulness for fast prototyping of image processing applications on a MIMD-DM architecture. This software can assist the programmer during the implementation of image processing applications in his constrained search for an efficient matching between algorithm and architecture. Two main conclusions were drawn from this work. First, the implementation of a connected component labeling algorithm on a multi-transputer architecture allowed us to quantify the gap between the estimated performances predicted by SynDEx and the effective performances measured on the generated executives. This gap, initially pointed out in the v3 release, is largely reduced in the v4. As part of this work, the v4 executive has been ported to T800 and T9000 targets. Second, the strong impact of the process granularity both on the ease of the specification and the efficiency of the implementation has been evidenced. From a pragmatic point of view, this second conclusion clearly shows the advantages of a tool such as SynDEx, making it possible to quickly evaluate these criteria at many granularity levels. From a more prospective point of view, the formalisation of some recurrent graph transformation rules, appearing when searching for an optimal granularity, led us to the concept of algorithmic skeletons.

    Keywords: image processing, parallelism, MIMD, SynDEx, granularity, algorithmic skeletons.

    Bibtex Reference:

    @Article{ts1997,
    	AUTHOR = {Ginhac, Dominique and Serot, Jocelyn and Derutin, Jean-Pierre},
    	TITLE = {Evaluation de l'outil SynDEx en vue de prototypage rapide d'applications de traitement d'images sur machine MIMD-DM},
    	JOURNAL = {Traitement du Signal},
    	VOLUME = {14},
    	YEAR = {1997},
    	NUMBER = {6},
    	PAGES = {605-613},
    	URL = {http://hdl.handle.net/2042/2029},
    }				

Book chapters

Can't find a specific book chapter? Just ask for it.

  1. D. Ginhac, "Smart cameras on a chip: using complementary metal oxide semiconductor (CMOS) image sensors to create smart vision chips" In D. Durini (eds), High performance silicon imaging: Fundamentals and applications of CMOS and CCD sensors, Cambridge: Woodhead Publishing Limited, In Press, 2014
    pdf (64 KB) Original chapter on Woodhead Publishing

    Abstract: Today, improvements in the growing digital imaging world continue to be made with two main image sensor technologies: charge coupled devices (CCD) and CMOS sensors. The continuous advances in CMOS technology for processors and memories have made CMOS sensor arrays a viable alternative to the popular CCD sensors. This has led to the adoption of CMOS image sensors in several high-volume products, such as webcams, mobile phones, and PDAs. New technologies provide the potential for integrating a significant amount of VLSI electronics into a single chip, greatly reducing the cost, power consumption, and size of the camera. By exploiting these advantages, innovative CMOS sensors have been developed. Moreover, the main advantage of CMOS image sensors is the flexibility to integrate signal processing at the focal plane down to the pixel level. As CMOS image sensor technologies scale to 0.13 µm processes and below, processing units can be realized at chip level (system-on-chip approach), at column level by dedicating processing elements to one or more columns, or at pixel level by integrating a specific processing unit in each pixel. By exploiting the ability to integrate sensing with analog or digital processing, new types of CMOS imaging systems can be designed for machine vision, surveillance, medical imaging, motion capture, and pattern recognition, among other applications.
    Historically, most research has focused on chip- and column-level processing. Indeed, pixel-level processing is generally dismissed because pixel sizes are often too large to be of practical use. However, as CMOS scales, integrating a processing element at each pixel or group of neighboring pixels becomes feasible. This offers the opportunity to increase imaging quality, in terms of resolution or noise for example, by integrating specific processing functions such as correlated double sampling, anti-blooming, high dynamic range, and even all basic camera functions (color processing functions, color correction, white balance adjustment, gamma correction) onto the same camera-on-chip. Furthermore, employing a processing element per pixel offers the opportunity to achieve massively parallel computations and thus the ability to exploit the high-speed imaging capability of CMOS image sensors. As integrated circuits keep scaling down following Moore's Law, recent trends show a significant number of papers discussing the design of digital imaging systems that take advantage of the increasing number of transistors available in each pixel in order to perform analog-to-digital conversion, data storage, and sophisticated digital image processing.
    In this book chapter, we first survey existing work on chip-level image processing applications embedded in high-performance CMOS imaging devices. However, simply integrating analog or digital blocks operating on the pixel flow does not fully exploit the potential of CMOS imaging technologies. So, in the remainder of this chapter, we focus on column-level and pixel-level image processing, in which we can benefit from massively parallel computations to integrate complex image processing applications. Finally, we survey recent trends in three-dimensional integrated imagers. 3D stacking technology is emerging as a solution for designing powerful imaging systems, because the sensor, the analog-to-digital converters, and the image processors can be designed and optimized in different technologies, improving overall system performance.

    Bibtex Reference:

    @incollection{ Elsevier:2014-hspi,
    	Author = {Ginhac, D.},
    	Editor = {Durini, D},
    	Title = {Smart cameras on a chip: using complementary metal oxide semiconductor (CMOS) image sensors to create smart vision chips},
    	Booktitle = {High performance silicon imaging: Fundamentals and applications of CMOS and CCD sensors},
    	Year = {2014},
    	Pages = {},
    	ISBN = {},
    	URL={http://www.woodheadpublishing.com/en/book.aspx?bookID=2683}
    	}
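Of the pixel-level functions listed in this chapter's abstract, correlated double sampling (CDS) is the simplest to illustrate. The toy model below uses invented numbers; it only shows the principle that differencing two readings of the same pixel cancels a fixed per-pixel offset.

```python
# Correlated double sampling: read the pixel twice (reset level, then
# signal level) and subtract, cancelling the fixed per-pixel offset.
def cds(reset_sample, signal_sample):
    """Return the offset-free photo-generated signal."""
    return reset_sample - signal_sample

# Toy numbers: both readings carry the same unknown per-pixel offset.
offset = 0.35                 # fixed-pattern offset (arbitrary units)
reset = 1.0 + offset          # reset level plus offset
signal = 0.2 + offset         # level after light integration, plus offset
out = cds(reset, signal)      # the common offset cancels, leaving about 0.8
```

Because the offset term appears in both samples, it drops out of the difference, which is how CDS suppresses fixed-pattern and reset noise at the pixel level.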
    				

  2. S. Chambaron, B. Berberian, L. Delbecque, D. Ginhac, and A. Cleeremans "Implicit motor learning in discrete and continuous tasks: Toward a possible explanation of discrepant results" In F. Columbus (eds), Motor Skills: Development, Impairment, and Therapy, New York: Nova Science Publishers, 139-155, 2009
    pdf (15.6 MB) Original chapter on Nova Publishers

    Abstract: Can one learn implicitly, that is, without conscious awareness of what it is that one learns? Daily life is replete with situations where our behavior is seemingly influenced by knowledge to which we have little access. Riding a bicycle, playing tennis or driving a car, all involve mastering complex sets of motor skills, yet we are at a loss when it comes to explaining exactly how we perform such physical feats. Thus, while it is commonly accepted and hence unsurprising that we have little access to the cognitive processes involved in mental operations, it also appears that knowledge itself can remain inaccessible to report yet influence behavior. Reber, who coined the expression “implicit learning” in 1967, defined it as “the process whereby people learn without intent and without being able to clearly articulate what they learn” (Cleeremans, Destrebecqz, & Boyer, 1998). The research described in this chapter is positioned at the confluence of two different domains: Implicit Learning on the one hand, and Skill Acquisition on the other. The two domains have remained largely independent from each other, but their intersection nevertheless constitutes a field of primary import: the implicit motor learning field. The hallmark of implicit motor learning is the capacity to acquire skill through physical practice without conscious recollection of what elements of performance have improved. Unfortunately, studies dealing with implicit motor learning are not very abundant (Pew, 1974; Magill & Hall, 1989; Wulf & Schmidt, 1997; Shea, Wulf, Whitacre, & Park, 2001). These studies provide an apparently straightforward demonstration of the possibility of unconsciously learning the structure of a complex continuous task in a more efficient way than explicit learning allows. Nevertheless, other evidence seems to challenge this view. 
Indeed, recent studies (Chambaron, Ginhac, Ferrel-Chapus & Perruchet, 2006; Ooteghem, Allard, Buchanan, Oates & Horak, 2008) suggest that taking advantage of the repetition of continuous events may not be as easy as previous research leads us to believe. These studies have suggested that sequence learning in continuous tracking tasks might be artefactually driven by peculiarities of the experimental material rather than by implicit sequence learning per se. Consequently, a central goal of this chapter will be to reconcile these discrepant results so as to better characterize the conditions in which implicit motor learning occurs. Moreover, understanding what facilitates or prevents learning of regularities in motor tasks will be useful both in sport and in motor rehabilitation fields.

    Bibtex Reference:

    @incollection{ ISI:000275552100006,
    	Author = {Chambaron, S. and Berberian, B. and Delbecque, L. and Ginhac, D. and Cleeremans, A.},
    	Editor = {Pelligrino, LT},
    	Title = {Implicit Motor Learning in discrete and continuous tasks: Toward a possible account of discrepant results},
    	Booktitle = {Handbook of motor skills: Development, Impairment and Therapy},
    	Year = {2009},
    	Pages = {139-155},
    	ISBN = {978-1-60741-811-5},
    	Unique-ID = {ISI:000275552100006},
    	URL={https://www.novapublishers.com/catalog/product_info.php?products_id=10288}
    	}
    				

  3. D. Ginhac, F. Yang, X. Liu, J. Dang and M. Paindavoine, "Robust Face Recognition System based on a multi-views face database" In K. Delac, M. Grgic and M.S. Bartlett (eds), Recent Advances in Face Recognition, Vienna, Austria: InTech Publishers, 27-38, 2008
    pdf (535 KB) Original chapter on InTech

    Abstract: In this chapter, we describe a new robust face recognition system based on a multi-views face database that derives some 3-D information from a set of face images. Our objective is to provide an approximately 3-D system for improving the performance of face recognition. The main goals of this vision system are 1) to minimize the hardware resources, 2) to obtain high success rates of identity verification, and 3) to cope with real-time constraints. Using the multi-views database, we address the problem of face recognition by evaluating the two methods PCA and ICA and comparing their relative performance. We explore the issues of subspace selection, algorithm comparison, and multi-views face recognition performance. In order to make full use of the multi-views property, we also propose a strategy of majority voting among the five views, which improves the recognition rate. Experimental results show that ICA is a promising method among the many possible face recognition methods, and that the ICA algorithm with majority voting is currently the best choice for our purposes.
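    The five-view majority-voting strategy mentioned in the abstract can be sketched as follows. This is only a minimal illustration of the voting step, not the chapter's actual implementation; the function and label names are hypothetical, and the per-view identity decisions are assumed to come from an upstream classifier (e.g. PCA- or ICA-based).

```python
from collections import Counter

def majority_vote(view_predictions):
    """Return the identity label predicted by the most views.

    view_predictions: a list of identity labels, one per face view
    (the chapter uses five views per subject). In case of a tie,
    Counter.most_common keeps the label first encountered in the list.
    """
    counts = Counter(view_predictions)
    best_label, _ = counts.most_common(1)[0]
    return best_label

# Example: three of five views agree on the same identity.
decision = majority_vote(["alice", "bob", "alice", "alice", "carol"])
```

    A plurality vote like this tolerates misclassification of a minority of views, which is one plausible reading of why the chapter reports improved recognition rates with voting over single-view decisions.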

    Bibtex Reference:

    @incollection{intech:rafr2008,
    	Author = {Ginhac, D. and Yang, F. and Liu, X. and Dang, J. and Paindavoine, M.},
    	Editor = {K. Delac and M. Grgic and M.S. Bartlett},
    	Title = {Robust Face Recognition System based on a multi-views face database},
    	Booktitle = {Recent Advances in Face Recognition},
    	Year = {2008},
    	Pages = {27-38},
    	ISBN = {978-953-7619-34-3},
    	URL = {http://www.intechopen.com/articles/show/title/robust_face_recognition_system_based_on_a_multi-views_face_database},
    }
    				

  4. D. Ginhac, F. Yang, and M. Paindavoine, "Design, Implementation and Evaluation of Hardware Vision Systems Dedicated to Real-Time Face Recognition" In K. Delac and M. Grgic (eds), Face Recognition, Vienna, Austria: InTech Publishers, 123-148, 2007
    pdf (1.8 MB) Original chapter on InTech

    Abstract: Human face recognition is an active area of research spanning several disciplines such as image processing, pattern recognition, and computer vision. Most research has concentrated on the algorithms of segmentation, feature extraction, and recognition of human faces, which are generally realized by software implementation on standard computers. However, many applications of human face recognition such as human-computer interfaces, model-based video coding, and security control (Kobayashi, 2001, Yeh & Lee, 1999) need to be high-speed and real-time, for example, passing through customs quickly while ensuring security. In recent years, our laboratory has focused on face processing and obtained interesting results concerning face tracking and recognition by implementing original dedicated hardware systems. Our aim is to implement on embedded systems efficient models of unconstrained face tracking and identity verification in arbitrary scenes. The main goal of these various systems is to provide efficient and robust algorithms that only require moderate computation in order 1) to obtain high success rates of face tracking and identity verification and 2) to cope with the drastic real-time constraints. The goal of this chapter is to describe three different hardware platforms dedicated to face recognition. Each of them has been designed, implemented and evaluated in our laboratory.

    Bibtex Reference:

    @incollection{intech:fr2007,
    	Author = {Ginhac, D. and Yang, F. and Paindavoine, M.},
    	Editor = {K. Delac and M. Grgic},
    	Title = {Design, Implementation and Evaluation of Hardware Vision Systems Dedicated to Real-Time Face Recognition},
    	Booktitle = {Face Recognition},
    	Year = {2007},
    	Pages = {123-148},
    	ISBN = {978-3-902613-03-5},
    	URL = {http://www.intechopen.com/articles/show/title/design__implementation_and_evaluation_of_hardware_vision_systems_dedicated_to_real-time_face_recogni},
    }
    				

Conference proceedings

Can't find a specific conference proceeding? Just ask for it.

  1. D. Pellion, K. Jradi, N. Brochard, D. Prêle, and D. Ginhac, "Dark Count rate measurement in Geiger mode and simulation of a photodiode array, with CMOS 0.35 technology and transistor quenching" in Proceedings of New Developments in Photodetection – NDIP 2014, Tours, France, July 2014.
  2. PJ. Lapray, B. Heyrman, and D. Ginhac, “HDR-ARtiSt: A 1280 × 1024-pixel adaptive real-time smart camera for high-dynamic range video”, in Real-Time Image and Video Processing, SPIE Photonics Europe, Brussels, Belgium, 14-17 April 2014.
  3. PJ. Lapray, B. Heyrman, M. Rosse, and D. Ginhac, “A 1.3 megapixel FPGA-based smart camera for high dynamic range real time video”, in Seventh ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC), Palm Springs, United States, 29 Oct - 1 Nov 2013.
  4. D. Prêle, D. Franco, D. Ginhac, K. Jradi, F. Lebrun, S. Perasso, D. Pellion, A. Tonazzo, and F. Voisin. “SiPM cryogenic operation down to 77 K”, in 10th International Workshop On Low Temperature Electronics (WOLTE10), Paris, France, 14-17 Oct 2013.
  6. PJ. Lapray, B. Heyrman, and D. Ginhac, “HDR-ARtiSt : une caméra intelligente dédiée à la vidéo à grande dynamique en temps réel”, in 24ème colloque Gretsi, Brest, France, 2-6 Sep 2013.
  7. PJ. Lapray, B. Heyrman, and D. Ginhac. “A smart camera for High Dynamic Range imaging”, in Second Workshop on Architecture of Smart Camera (WASC), Sevilla, Spain, 3-5 Jun 2013.
  8. F. Hamdi, T. Toczek, B. Heyrman, and D. Ginhac, “Scene-based noise reduction on a smart camera”, in IEEE Conference on Electronics, Circuits, and Systems (ICECS), Sevilla, Spain, 9-12 Dec 2012.
  9. PJ. Lapray, B. Heyrman, M. Rosse, and D. Ginhac, “High Dynamic Range Real-time Vision System for Robotic Applications”, in 1st Workshop on Smart Camera for Robotic Application (SCABOT), IEEE Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 7-12 Oct 2012.
  10. P-J. Lapray, B. Heyrman, M. Rosse, and D. Ginhac, "HDR-ARtiSt: High Dynamic Range Advanced Real-time imaging System", in IEEE International Symposium on Circuits and Systems (ISCAS 2012), Seoul, Korea, 20-23 May 2012.
  11. K. Jradi, D. Pellion, and D. Ginhac, “Multi-pixel Geiger mode imager for medical applications”, in International Workshop on New Photon-Detectors (PhotoDet), Orsay, France, June 13-15, 2012.
  12. PJ. Lapray, B. Heyrman, M. Rosse, and D. Ginhac, “Smart camera design for realtime High Dynamic Range imaging”, in 1st IEEE/ACM Workshop on Architecture of Smart Camera (WASC), Clermont Fd, 5-6 April 2012.
  14. P-J. Lapray, B. Heyrman, M. Rosse, and D. Ginhac, "Smart camera design for realtime high dynamic range imaging", in Fifth ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC), Ghent, Belgium, pp.1-7, 22-25 Aug. 2011.
  15. V. Brost, D. Saptono, C. Meunier, F. Yang and D. Ginhac, "Modular VLIW processor based on FPGA for real-time image processing", in 23eme Colloque GRETSI sur le traitement du signal et des images, Bordeaux, France, Sept 2011.
  16. F. Yang, V. Brost, A. Poinsot, D. Ginhac and Z. Mu, "Implémentations matérielles d’un système biométrique bimodal", in 23eme Colloque GRETSI sur le traitement du signal et des images, Bordeaux, France, Sept 2011.
  17. D. Ginhac, “Smart Cameras and Visual Sensor Networks”, in Embedded Systems Week (ESWEEK), Grenoble, 11-16 Oct 2009.
    See http://esweek09.inrialpes.fr and https://pervasive.aau.at/SCSN_tutorial/ for more details.
  18. M. K. Nguyen, C. Driol, T. T. Truong, M. Paindavoine, and D. Ginhac, “Nouvelle tomographie Compton”, in 22eme Colloque GRETSI sur le traitement du signal et des images, Dijon, France, 2-6 Sep 2009.
  19. S. Chambaron, A. Pasquali, D. Ginhac, and A. Cleeremans, "The role of pace and time in sequence learning: What is the impact on learning?", in Eighth Annual Summer Interdisciplinary Conference (ASIC 2009), Sarre, Italy, July 22-27, 2009.
    pdf not available
  20. S. Chambaron, N. Berg, D. Ginhac, A. Cleeremans, and P. Peigneux, "Learning discrete and continuous regularities in two-dimensional settings", in Eighth Annual Summer Interdisciplinary Conference (ASIC 2009), Sarre, Italy, July 22-27, 2009.
    pdf not available
  21. A. Pasquali, S. Chambaron, D. Ginhac, and A. Cleeremans, “Incidental learning of interactions between motor and linguistic sequences”, in 13th Meeting of the Association for the Scientific Study of Consciousness (ASSC), Berlin, Germany, 5-8 June, 2009.
    pdf not available
  22. J. Dubois, D. Ginhac, and M. Paindavoine, "A Programmable Vision Chip with High Speed Image Processing", in 28th International Congress on High-Speed Imaging and Photonics (ICHSIP 28), Canberra, Australia, November 9-14, 2008
  23. S. Chambaron, A. Destrebecqz, D. Ginhac, and A. Cleeremans, "Influence of response-stimulus interval (RSI) on sequence learning", in XXIX International Congress of Psychology ICP, Berlin, Germany, July 20-25, 2008.
    pdf not available
  24. S. Chambaron, A. Destrebecqz, D. Ginhac, and A. Cleeremans, "The role of time and pace in sequence learning", in 12th annual meeting of the Association for the Scientific Study of Consciousness (ASSC12), Taipei, Taiwan, June 19-22, 2008.
    pdf not available
  25. S. Chambaron, D. Ginhac, A. Cleeremans, and P. Peigneux, "Learning discrete and continuous regularities in two-dimensional settings", in BAPS 2008 annual meeting, University of Leuven, Belgium, May 26, 2008.
    pdf not available
  26. S. Chambaron, A. Destrebecqz, D. Ginhac, and A. Cleeremans, "Influence of the response–stimulus interval on implicit sequence learning: constant vs. variable RSIs", in BAPS 2008 annual meeting, University of Leuven, Belgium , May 26, 2008.
    pdf not available
  27. S. Chambaron, L. Delbecque, D. Ginhac, D. Holender, and A. Cleeremans, "Action, Observation et Imagerie Mentale : Apports de l'apprentissage implicite au domaine moteur", in Journées de la Société Française de Psychologie du Sport SFPS 2008, Quiberon, France, 25-29 March, 2008.
    pdf not available
  29. J. Dubois, D. Ginhac, and M. Paindavoine, "A Multi-Processing 10 000 frames/s CMOS Image Sensor", in Workshop on Design and Architectures for Signal and Image Processing, DASIP 2007, Grenoble, France, November, 2007.
  30. E. Prasetyo, H. Afandi, N. Huda, D. Ginhac, and M. Paindavoine, "A 8 bits Pipeline Analog to Digital Converter Design for High Speed Camera Application", in The Eighth Industrial Electronics Seminar, ITS Surabaya (Indonesia), Nov 2007.
  31. J. Dubois, D. Ginhac, and M. Paindavoine, "Un Capteur d'Images Reconfigurable dédié à l'Imagerie Rapide, aux Traitements d'Images Linéaires et Réseaux Convolutifs", in 8ème colloque sur le Traitement Analogique de l'Information, du Signal et ses Applications, TAISA 2007, Lyon, France, October 2007.
  32. M. Paindavoine, J. Dubois, R. Mosqueron, B. Heyrman, and D. Ginhac, "High speed camera with embedded image processing", in 6th International Workshop on Embedded System, Vaasa, Finland, September, 2007.
    pdf not available
  33. S. Chambaron and D. Ginhac, "Procedural Variations around a SRT task", in BAPS 2007 - Belgian Association for Psychological Science, Louvain la Neuve, Belgium, June, 2007.
    pdf not available
  34. S. Chambaron and D. Ginhac, "Implicit learning of sequences: discreteness versus continuity", in European Workshop On Movement Science - EWOMS 2007, Amsterdam, The Netherlands, June 2007.
    pdf not available
  35. J. Dubois, D. Ginhac, and M. Paindavoine, "VLSI design of a high-speed CMOS image sensor with in-situ 2D programmable processing", in 14th European Signal Processing Conference (EUSIPCO 2006), Florence, Italy, Sept 2006.
  36. J. Dubois, D. Ginhac, and M. Paindavoine, "A single-chip 10 000 frames/s CMOS sensor with in-situ 2D programmable image processing", in IEEE International Workshop on CAMPS 2006, Montreal, Quebec, Canada, Sept 2006.
  37. J. Dubois, D. Ginhac, and M. Paindavoine, "Design of a 10 000 frames/s CMOS sensor with in-situ image processing", in International Workshop on Reconfigurable Communication-centric Systems-on-Chip (RECOSOC'2006), Montpellier, France, July 2006.
  38. S. Chambaron, D. Ginhac, and P. Perruchet, "Is Learning in SRT Tasks Robust Across Procedural Variations?", in The Twenty-Eighth Annual Conference of the Cognitive Science Society - CogSci 2006, Vancouver, Canada, July 2006.
  39. N.S. Salahuddin, M. Paindavoine, D. Ginhac, M. Parmentier, and N. Tamda, "Conception de photodiodes CMOS dédiées à l’Imagerie Gamma. Imagerie pour les Sciences du Vivant", in Instrumentation IMVIE-3, ECRIN, Paris, France, June 2006.
    pdf not available
  40. P.K. Tahej, D. Ginhac, I. Olivier, and C. Ferrel-Chapus, "Structuration de l’espace et conversion des informations visuelles en coordonnées motrices", in 11ème Congrès International de l’ACAPS, Paris, France, Oct. 2005.
    pdf not available
  41. S. Chambaron, D. Ginhac, and P. Perruchet, "Apprentissage moteur implicite: variations autour d’une tâche TRS", in Congrès national de la Société Française de Psychologie, Nancy, France, Sept. 2005.
    pdf not available
  42. E. Prasetyo, D. Ginhac, and M. Paindavoine, "Design and Implementation of a 8 bits Pipeline Analog to Digital Converter in a 0.6 µm CMOS technology", in Indonesian Student's Scientific Meeting in Europe 2005, Paris, Sept 2005.
  43. D. Ginhac, E. Prasetyo, M. Paindavoine, and B. Heyrman, "Principles of a CMOS sensor dedicated to face tracking and recognition", in IEEE International Workshop on CAMP 2005, Palermo, Italy, pp. 33-38, July 2005.
  44. S. Chambaron, D. Ginhac, and P. Perruchet, "Implicit motor learning in discrete vs continuous tasks", in European Workshop On Movement Science, Vienna, Austria, June 2005.
    pdf not available
  45. D. Ginhac, E. Prasetyo, and M. Paindavoine, "CMOS sensor for face tracking and recognition", in 26th International Congress on High Speed Photography and Photonics (HSPP’04), Alexandria (Virginia USA), Sept 2004.
  47. N.S. Salahuddin, D. Ginhac, M. Paindavoine, M. Parmentier, and N. Tamda, "A CMOS image sensor dedicated to medical gamma camera application", in 26th International Congress on High Speed Photography and Photonics (HSPP’04), Alexandria (Virginia USA), Sept 2004.
  48. D. Ginhac, E. Prasetyo, and M. Paindavoine, "Localisation et Reconnaissance de visages : Vers une implantation sur silicium", in IEEE Signaux, Circuits et Systèmes 2004, Monastir, Tunisia, March 2004.
  49. N. Malasne, F. Yang, M. Paindavoine, and D. Ginhac, "Face Tracking and Recognition: from algorithm to implementation", in SPIE’s 47th Annual Meeting, Advanced Signal Processing Algorithms, Architectures, and Implementations, Seattle, USA, Aug 2002.
    pdf not available
  50. N. Malasne, F. Yang, M. Paindavoine, and D. Ginhac, "Implantation temps réel d'un algorithme de localisation et de reconnaissance de visages sur un FPGA", in 18eme Colloque GRETSI sur le traitement du signal et des images, Toulouse, France, Sept 2001.
  51. N. Malasne, F. Yang, M. Paindavoine, and D. Ginhac, "RBF Neural Networks Applied to Face Tracking and Recognition", in QCAV 2001, Le Creusot, France, May 2001.
  52. D. Ginhac, J. Sérot, J.P. Dérutin, and R. Chapuis, "SKiPPER : un environnement de programmation parallèle fondé sur les squelettes et dédié au traitement d'images", in 17eme Colloque GRETSI sur le traitement du signal et des images, pages 1209-1212, Vannes, France, Sept 1999.
  53. D. Ginhac, J. Sérot, and J.P. Dérutin, "Fast prototyping of image processing applications using functional skeletons on MIMD-DM architecture", in IAPR Workshop on Machine Vision Applications, pages 468-471, Chiba, Japan, Nov. 1998.
  54. D. Ginhac, J. Sérot, and J.P. Dérutin, "Utilisation de squelettes fonctionnels au sein d'un outil d'aide à la parallélisation", in 4èmes Journées Adéquation Algorithme Architecture en Traitement du Signal et Image, Saclay, France, Jan 1998.
  55. D. Ginhac, J. Sérot, and J.P. Dérutin, "Vers un outil d'aide à la parallélisation fondé sur les squelettes", in 16eme Colloque GRETSI sur le traitement du signal et des images, Grenoble, France, Sept 1997.
  56. D. Ginhac, J. Sérot, and J.P. Dérutin, "Evaluation de l'outil SynDEx pour l'implantation d'un algorithme d'étiquetage en composantes connexes sur la machine Transvision", in 3èmes Journées Adéquation Algorithme Architecture en Traitement du Signal et Image, Toulouse, Jan 1996.