PhD grant: Intelligent assistant to help blind people move around (MIPTIS)
ORGANISATION NAME: University of Tours
ORGANISATION COUNTRY: France
FUNDING TYPE: Funding, Mobility Incoming
DEADLINE DATE: 30/04/2021
RESEARCH FIELD: Professions and applied sciences
CAREER STAGE: First Stage Researcher (R1) (up to the point of PhD)
Description
- Keywords
Artificial intelligence, human-machine interface, deep learning, data mining, spatiotemporal data, smartphone
- Profile and skills required
The candidate should ideally hold a Master's degree in Computer Science. The following are required:
- Knowledge of machine learning and data mining
- Good programming skills
The following are desirable:
- Knowledge of human-machine interfaces
- Knowledge of image processing
- Knowledge of geomatics
- Smartphone programming skills
- Project description
In recent years, particularly with the emergence of deep learning, AI has made significant advances, and its use is spreading to many areas of human activity. Some of these sectors are economically very promising (autonomous vehicle driving, for example); others are less so and therefore benefit less from these advances. We have observed this repeatedly in our previous research on interfaces for users whose disabilities limit their access to digital content (see https://sculpture3d.univ-tours.fr/). During this work, we had the opportunity to meet young visually impaired people at IRECOV (Institut de Rééducation et d'Education pour la Communication, l'Ouïe et la Vue, based in Tours). These young people face a paradox: on the one hand, access to digital resources is a real revolution for them; on the other hand, they are often the ones left behind by this progress.
The aim of this thesis is therefore to contribute to an intelligent assistant designed to help visually impaired people. Our starting point is travel assistance, with three interrelated aspects: recognising objects in the environment (everyday objects, personal objects, moving objects, fixed obstacles, etc.); mapping this environment so that movement can be adapted to the immediate surroundings; and defining suitable human-machine interfaces.
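To make the mapping and route-planning aspect concrete, here is a minimal sketch: once obstacles have been detected and placed on an occupancy grid of the environment, a breadth-first search yields a shortest obstacle-free route between two cells. This is only an illustration of the kind of problem involved, not the project's actual method; the grid encoding and the function name are assumptions made for the example.

```python
from collections import deque

def shortest_route(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid: list of equal-length strings, '#' marks a fixed obstacle,
          '.' marks free space.
    start, goal: (row, col) tuples.
    Returns the list of cells from start to goal (inclusive), or None
    when the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking back through the parents.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Toy environment: two obstacle cells in the middle row.
grid = ["....",
        ".##.",
        "...."]
route = shortest_route(grid, (0, 0), (2, 3))
```

In a real assistant the grid would be built and updated continuously from the object-recognition component, and the resulting route would be conveyed through the human-machine interface (speech, vibration, spatialised audio).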
Disclaimer:
The responsibility for the funding offers published on this website, including the funding description, lies entirely with the publishing institutions. The application is handled solely by the employer, which is also fully responsible for the recruitment and selection processes.