We have several physical rooms where people visually check objects using an Android device.
The device does not use its embedded camera; instead, a webcam is fixed in each room. The device is just a remote control for the checking process.
Image processing is heavy, so we want to run it on a powerful computer, asynchronously.
- one fixed webcam and one android device per room
- every webcam is connected to the same PC
- dispatch videos from webcams to devices
- process video frames on request
We want Unity on both the PC and the devices (Android devices).
Room : a room with a webcam in a fixed transform (position, rotation)
A room has :
- a name
- a camera
Camera : a webcam with some parameters, like intrinsic parameters and a transform. There could be up to 6 webcams.
A camera has intrinsics and a transform (consider it just an XML file for now).
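A minimal dummy XML for the camera parameters could look like the fragment below. The element and attribute names are illustrative assumptions, not part of the spec:

```xml
<!-- dummy camera parameters; names and values are placeholders -->
<camera id="cam01">
  <intrinsics fx="800.0" fy="800.0" cx="320.0" cy="240.0" />
  <transform>
    <position x="0.0" y="2.5" z="0.0" />
    <rotation x="30.0" y="0.0" z="0.0" /> <!-- Euler angles, degrees -->
  </transform>
</camera>
```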
Object : an object, identified by an id. It will be downloaded from a remote server. Objects will be inserted in rooms for validation by an operator.
In this document, object has only an id.
User : a user, identified by a name. Probably no more than 2 will be connected at the same time; 6 maximum.
Server side (=on PC)
Functionally, there are 2 scenes:
- one to start/stop the server and manage rooms/camera
- one to calibrate a camera/room = finding parameters for this room
Main scene :
- start/stop the server and see connected rooms
- manage cameras and rooms. A menu button displays a simple UI with:
  - a cameras dropdown (listing the connected camera devices)
  - a rooms dropdown (listing the created rooms)
It is possible to create a new room or edit an existing one and attach a camera to it.
It is possible to see the video feed from a selected camera (purpose: checking that the selected camera is indeed the one we want to use).
A button allows calibrating a selected room; it opens a new scene, the calibration scene.
No recommendation for the UI; it can be very basic.
Calibration scene :
It displays the webcam feed corresponding to the camera of the selected room.
When the user clicks a button, the feed freezes
and the user performs some operations (not included here: this is where we will calibrate the camera and find its transform in the room).
The user can go back to the live feed.
When the user returns to the live feed or to the Main scene, the camera parameters are saved (make it a dummy XML for now).
Always in the background (while the server is started), the server :
- streams video live to rooms given the (cameraId, roomId) mapping, so each connected user sees the camera feed corresponding to the room they are in.
- manages requests from users. A request could be "getObjectPosition(roomId, objectId): transform"
  -> it runs a heavy process on the last frame of the corresponding webcam, using the camera parameters and the frame to find the object transform. It can take up to a few seconds: make it a dummy process running in a separate thread (so as not to block everything).
  -> once the request is processed, the server notifies the room with the transform info. Make it random or based on a simple image feature.
NB : the app on the device must not freeze.
NB : it is OK if a room can be calibrated only while the server is stopped.
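The request handling described above can be sketched in a language-neutral way. In the Unity project this would be C#, but the idea is the same: run the dummy heavy process on a worker thread and deliver the result through a callback, so the server loop never blocks. All names here (`dummy_heavy_process`, `get_object_position_async`) are illustrative, not prescribed:

```python
import random
import threading
import time

def dummy_heavy_process(room_id: str, object_id: str) -> dict:
    """Stand-in for the real image processing: takes a while, returns a random transform."""
    time.sleep(0.1)  # simulate seconds of processing (shortened here)
    return {
        "roomId": room_id,
        "objectId": object_id,
        "position": [random.uniform(-1, 1) for _ in range(3)],
        "rotation": [random.uniform(0, 360) for _ in range(3)],
    }

def get_object_position_async(room_id: str, object_id: str, notify) -> threading.Thread:
    """Run the heavy process off the main thread; call notify(transform) when done."""
    def worker():
        transform = dummy_heavy_process(room_id, object_id)
        notify(transform)  # e.g. push the result to every device in the room

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

`get_object_position_async` returns immediately, so neither the server loop nor the device UI waits on the processing; the room is notified whenever the worker finishes.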
Client side (= on device)
- A user starts the Android app.
The user is offered to connect to the server and must specify the room he is in.
The server's IP and port should be editable but persisted (PlayerPrefs?).
We consider the device will stay attached to one room, so the default room id should be editable (ideally via a dropdown list) but persisted (PlayerPrefs?).
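In Unity the persistence above would use PlayerPrefs. As a language-neutral sketch, the same idea is a small key-value store saved to disk; the file name, keys, and default values below are assumptions for illustration only:

```python
import json
from pathlib import Path

SETTINGS_FILE = Path("device_settings.json")  # illustrative path
DEFAULTS = {"server_ip": "192.168.1.10", "server_port": 7777, "room_id": ""}

def load_settings() -> dict:
    """Return persisted settings, falling back to defaults for missing keys."""
    settings = dict(DEFAULTS)
    if SETTINGS_FILE.exists():
        settings.update(json.loads(SETTINGS_FILE.read_text()))
    return settings

def save_settings(settings: dict) -> None:
    """Persist the settings so they survive an app restart."""
    SETTINGS_FILE.write_text(json.dumps(settings))
```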
Once connected, he can see the video feed of the room he's in.
- By pressing a button he requests the object position (getObjectPosition, see above).
The last calculated transform should be displayed on screen, e.g. as text or by placing a dummy object.
A button allows disconnecting.
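One way to carry the getObjectPosition request and its answer over the wire is a small JSON message pair. The field names below are an assumption, not part of the spec:

```python
import json

# Request sent by the device when the user presses the button (field names illustrative)
request = {"type": "getObjectPosition", "roomId": "room1", "objectId": "obj42"}

# Notification pushed by the server once the heavy process finishes
response = {
    "type": "objectPosition",
    "roomId": "room1",
    "objectId": "obj42",
    "position": [0.3, 1.2, -0.5],
    "rotation": [0.0, 90.0, 0.0],
}

wire = json.dumps(request)  # what actually goes over the socket
```

Because the response arrives as a separate push message, the device can keep rendering the live feed while the request is pending, satisfying the "app must not freeze" constraint.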
It's an in-house project. No remote server.
The Unity version can be discussed. We use 2017.3.1f1 for now, so: not below this version.
Deliverable: a Unity project with all sources to deploy the app on PC and device.