Jun 22, 2015 – Jun 26, 2015
New York City, NY, USA
Abstract Registration Due: Feb 10, 2015
Submission Deadline: Feb 17, 2015
Notification Due: Apr 7, 2015
Final Version Due: Apr 28, 2015
ACM WiSec 2015 will run from June 22 to June 26, 2015. It will be co-located with RFIDSec’15 in New York City, NY, USA.
ACM WiSec is the leading ACM and SIGSAC conference dedicated to all aspects of security and privacy in wireless and mobile networks and their applications. In addition to the traditional ACM WiSec topics of physical, link, and network layer security, we welcome papers focusing on the security and privacy of mobile software platforms, usable security and privacy, biometrics, cryptography, and the increasingly diverse range of mobile and wireless applications such as the Internet of Things and Cyber-Physical Systems. The conference welcomes both theoretical and systems contributions.
2 pm – 3 pm
2116 Hornbake Bldg, South Wing
Mercedes Torres, PhD
As a postdoctoral researcher at the University of Nottingham, she focuses on interdisciplinary research, specifically in the areas of Fine-Grained Visual Categorization, Image Processing and Analysis, and Machine Learning. She has designed and developed an image annotation framework for Phase 1 habitat classification in ground-taken photographs.
Currently, habitat classification (the process of mapping an area according to the habitats present on it) is carried out by human surveyors. This is expensive, time-consuming, laborious and subjective. What I have done is develop the first complete automatic alternative for the Phase 1 classification scheme, which is widely used in the UK. The problem itself is quite complicated, given the semantic similarities between the classes I have to recognize.

I have approached habitat classification as an image annotation problem and created a complete framework for it, composed of 5 elements: the source data, low-level and medium-level features extracted from these data, a novel machine learning classifier called Random Projection Forests, and a location-based voting system for my classifier. Moreover, I have used a novel source of information as the input to this framework: ground-taken geo-referenced photographs (which can be photographs taken with a mobile phone). The current state of the art normally uses remote-sensed imagery, but this is not detailed enough to distinguish between vegetation species, so ground-taken photos are actually a better alternative. Additionally, I have created a new ensemble classifier, called Random Projection Forests, which is based on Random Forests but much more efficient and accurate. Results show that my complete framework can successfully classify 7 out of the 10 main classes of Phase 1, which is quite good considering that this type of work has never been done before with the type of data I am using and the approach I have chosen.
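The talk does not give the details of the Random Projection Forests classifier, but the general random-projection-ensemble idea it is based on can be sketched: each tree in the ensemble is trained on a different random linear projection of the features, and predictions are combined by majority vote. The sketch below is a minimal illustration of that idea using scikit-learn decision trees; all class and parameter names are assumptions for illustration, not the speaker's actual implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


class RandomProjectionForest:
    """Illustrative ensemble: each tree sees a random Gaussian
    projection of the input features, and class predictions are
    combined by majority vote (in the spirit of Random Forests)."""

    def __init__(self, n_trees=10, n_components=5, seed=0):
        self.n_trees = n_trees
        self.n_components = n_components
        self.rng = np.random.default_rng(seed)
        self.members = []  # list of (projection_matrix, fitted_tree)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.members = []
        for _ in range(self.n_trees):
            # Random Gaussian projection to a lower-dimensional space;
            # each tree gets its own projection for diversity.
            P = self.rng.normal(size=(n_features, self.n_components))
            tree = DecisionTreeClassifier(random_state=0).fit(X @ P, y)
            self.members.append((P, tree))
        return self

    def predict(self, X):
        # Collect each tree's votes, then take the majority per sample.
        votes = np.stack([tree.predict(X @ P) for P, tree in self.members])
        return np.apply_along_axis(
            lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)
```

Projecting before splitting can make individual trees cheaper to train when the raw feature space is large (as with image descriptors), which is consistent with the efficiency claim in the abstract, though the actual design may differ.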