Saturday 31 March 2012

Surface project - part 17 (Major Demo - Rich Media Map - Animation)

It will be interesting to see some animation inside the demo, and I have found a good place to demonstrate it: when the user searches the push pins by tag, an animated ellipse is displayed around the matched push pins. WPF provides three basic animation classes: DoubleAnimation, ColorAnimation and PointAnimation.

Implementing the animation on the ellipse is not entirely straightforward: if you attach the ColorAnimation to the ellipse itself to animate the stroke, you will receive an exception saying that a Color value cannot be applied to a Brush property.

The solution is to attach the ColorAnimation to the SolidColorBrush that is used as the ellipse's stroke.

The following code shows how to attach a ColorAnimation to the ellipse's stroke.

ColorAnimation cAnimation = new ColorAnimation();
cAnimation.From = Colors.Red;
cAnimation.To = Colors.Green;
cAnimation.Duration = new Duration(TimeSpan.FromSeconds(1));
cAnimation.AutoReverse = true;
cAnimation.RepeatBehavior = RepeatBehavior.Forever;

// The animation targets the stroke brush, not the ellipse, so register
// the brush under a unique name and point the storyboard at it.
string name = GetUniqueName();
Mediator.TheWindow.RegisterName(name, ellipseborder);
myStoryboard.Children.Add(cAnimation);
Storyboard.SetTargetName(cAnimation, name);
Storyboard.SetTargetProperty(cAnimation, new PropertyPath(SolidColorBrush.ColorProperty));
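For the animation to actually run, the registered object (ellipseborder above, presumably the stroke's SolidColorBrush) must be a non-frozen brush instance, and the storyboard has to be started. A minimal sketch of those two remaining steps, where ellipse is a hypothetical Ellipse instance:

```csharp
// Give the ellipse its own SolidColorBrush instance; a shared/frozen
// brush such as Brushes.Red cannot be animated.
SolidColorBrush strokeBrush = new SolidColorBrush(Colors.Red);
ellipse.Stroke = strokeBrush;
ellipse.StrokeThickness = 3;

// Start the storyboard in the name scope where the brush was registered.
myStoryboard.Begin(Mediator.TheWindow);
```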

The following snapshots show the effect of the ellipse around the push pin.



The color of the stroke of the ellipse is changing from red to green.

Surface project - part 16 (Major Demo - Rich Media Map - Dynamic Panel Size)

As I said in the major demo introduction, there is no existing solution to resize a scatter view item in only one direction (at least, I haven't found one!).

To make the demo application as elegant as possible, I decided to control the size of the push pin panel programmatically.

The width of the push pin panel

As I use a list box to contain all the elements, a vertical scroll bar is necessary when the content exceeds the scroll viewer. However, if you set the scroll bar visibility to Visible or Auto, the space for the scroll bar is always reserved, whether or not the scroll bar is actually shown. So I need to change the width of the push pin panel when the scroll bar is not needed.

The surface list box contains a scroll viewer, which in turn contains the scroll bar. The scroll viewer has a property called ScrollableHeight, which tells us whether the scroll bar is necessary, and the LayoutUpdated event is fired when that property changes. Through debugging, I found the path to access the scroll viewer.

// The SurfaceListBox template wraps its content in a Grid whose first
// child is the ScrollViewer.
Grid grid = VisualTreeHelper.GetChild(TheSurfaceListBox, 0) as Grid;
ScrollViewer scroll = grid.Children[0] as ScrollViewer;

We can then change the width of the push pin panel so that it will clip the area for the scroll bar when it is not needed.
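Putting the pieces together, a hypothetical LayoutUpdated handler might look like the sketch below. The names PushPinPanel, BasePanelWidth and ScrollBarWidth are illustrative assumptions, and scroll is assumed to be stored in a field after the visual-tree lookup above:

```csharp
private const double ScrollBarWidth = 24;   // assumed reserved width
private const double BasePanelWidth = 400;  // assumed full panel width

private void Scroll_LayoutUpdated(object sender, EventArgs e)
{
    // ScrollableHeight > 0 means the content overflows the viewport
    // and the scroll bar is actually needed.
    bool needsScrollBar = scroll.ScrollableHeight > 0;

    // Narrow the panel when no scroll bar is shown, so the reserved
    // scroll bar area is clipped away.
    PushPinPanel.Width = needsScrollBar
        ? BasePanelWidth
        : BasePanelWidth - ScrollBarWidth;
}
```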


The height of the push pin panel

The WPF layout system is complicated; you will encounter a lot of null reference exceptions when the width or height of a particular control is not set. However, sometimes we cannot use a predefined size, because the size varies from situation to situation. I found that the MinHeight of each element is suitable for deciding the height of the push pin panel which contains all the elements.
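A minimal sketch of that idea, summing each item's MinHeight and clamping the result to a maximum; the names PushPinPanel, MediaItems and MaxPanelHeight are illustrative assumptions, not from the original code:

```csharp
// The panel grows with its content, but only up to a fixed maximum;
// beyond that point the scroll bar takes over.
double contentHeight = 0;
foreach (FrameworkElement item in MediaItems)
    contentHeight += item.MinHeight;

PushPinPanel.Height = Math.Min(contentHeight, MaxPanelHeight);
```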

Surface project - part 15 (Major Demo - Rich Media Map - Drag&Drop)

For this major demo, the supervisors want to let the user rearrange the order of the media in the push pin panel: the user can drag and drop one media item on top of another so that their positions change. Although drag and drop is nothing fancy in today's programming world, there are still some notable aspects which make the implementation on Surface not that easy.

Problem 1

As I put the media in a list box, the touch down event is first received by the list box, then the event routes up and down the tree. However, if I capture the touch down event to start a drag action, I run into an annoying but unavoidable problem: the user can no longer scroll or touch any part inside each media component. There is a solution provided by MSDN which, although not perfect, is still acceptable: the application only starts a drag action when the user puts at least two fingers on the element. After that, they can use one finger instead.
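A sketch of that two-finger trigger using standard WPF touch events; StartDrag is a hypothetical helper standing in for the actual Surface drag-and-drop call, and the field name is illustrative:

```csharp
private readonly HashSet<TouchDevice> touchesOnItem = new HashSet<TouchDevice>();

private void MediaItem_TouchDown(object sender, TouchEventArgs e)
{
    touchesOnItem.Add(e.TouchDevice);

    // Only begin the drag once a second finger lands on the element,
    // so single-finger gestures keep scrolling the list as usual.
    if (touchesOnItem.Count >= 2)
        StartDrag((FrameworkElement)sender, e);
}

private void MediaItem_TouchUp(object sender, TouchEventArgs e)
{
    touchesOnItem.Remove(e.TouchDevice);
}
```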

Problem 2

The supervisors want the element being dragged under the finger to have the same appearance as the original element. As the media components are composed of many controls, it would be costly to duplicate the whole media component. WPF provides RenderTargetBitmap as a solution: it achieves the same result by displaying an image snapshot of the visual tree.

public Image DragSourceCopy()
{
    Size dpi = new Size(96, 96);
    RenderTargetBitmap bmp = new RenderTargetBitmap(
        (int)this.DesiredSize.Width, (int)this.DesiredSize.Height,
        dpi.Width, dpi.Height, PixelFormats.Pbgra32);

    bmp.Render(this);
    JpegBitmapEncoder encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bmp));
    Image image = new Image();
    image.Source = encoder.Frames[0];
    return image;
}

Using the above piece of code, you can make an image copy of the whole component, no matter whether it is a video, image or text media. After testing, I found that DesiredSize records the correct on-screen width and height of the component.
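A hypothetical usage of DragSourceCopy when the drag starts; mediaComponent, dragCanvas and touchPoint are illustrative names, not from the original code:

```csharp
// Snapshot the component and float the copy on an overlay canvas so
// it follows the finger while the original stays in place in the list.
Image ghost = mediaComponent.DragSourceCopy();
ghost.Opacity = 0.7;
dragCanvas.Children.Add(ghost);
Canvas.SetLeft(ghost, touchPoint.X - ghost.Source.Width / 2);
Canvas.SetTop(ghost, touchPoint.Y - ghost.Source.Height / 2);
```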

The following snapshots show the drag and drop action that rearranges the media in the push pin panel.



After the drop, the order of the elements has changed.

Surface project - part 14 (Major Demo - Rich Media Map - Image Component)

The supervisors want the image component to be editable, which means the user can add their own drawing onto an existing image, or create a new image instead. There is an SDK example which does much the same thing: the user can draw, erase and undo strokes, but cannot choose the thickness of the stroke or save the image. As this application, the media map, is all about interacting with push pins rather than being a painting application, I removed the undo function and implemented the ability to choose the stroke thickness as well as to save the drawing.

The following code shows how to change the thickness of the stroke.

private void StrokeSizeSlider_ValueChange(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    DrawingPadCanvas.DefaultDrawingAttributes.Width = StrokeSizeSlider.Value;
    DrawingPadCanvas.DefaultDrawingAttributes.Height = StrokeSizeSlider.Value;
}

The following snapshots show how to create a new image.





The following snapshots show how to draw on an existing image.



The following code shows how to implement the saving function.

private void SaveImage_TouchDown(object sender, TouchEventArgs e)
{
    string path = "SavedImage.jpg";
    RenderTargetBitmap rtb = new RenderTargetBitmap(
        (int)DrawingPadCanvas.Width, (int)DrawingPadCanvas.Height,
        96d, 96d, PixelFormats.Default);
    rtb.Render(DrawingPadCanvas);
    JpegBitmapEncoder encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(rtb));
    // Dispose the stream so the file is flushed and unlocked.
    using (FileStream fs = new FileStream(path, FileMode.Create))
    {
        encoder.Save(fs);
    }
}


After saving, the image is stored on the local machine.

Friday 30 March 2012

Surface project - part 13 (Major Demo - Rich Media Map - Introduction)

A major demo application has been requested by the supervisor. As the supervisor is on a two-week trip, I will have two weeks to accomplish it. The major demo requires integrating the demos I built previously into an interactive map, and this time it needs a more impressive look and feel. Before he left, we briefly discussed the use cases for the application as well as the user interface. Below is the picture sketched by the supervisor.

As the scale of the application is large this time and the requirements are challenging, a detailed application specification and a few backbone applications are necessary. This post records the use case and user interface specification.



Major Demo - Rich Media Map - Use cases
  • user can place a push pin onto the map
  • user can view the content inside the push pin by clicking the pin
  • user can import image files into the push pin
  • user can import video files into the push pin
  • user can import text files into the push pin
  • user can add a drawing onto an image which has been imported into the push pin
  • user can modify text information which has been imported into the push pin
  • user can create a drawing, and the drawing should be imported into the push pin automatically when drawing finishes.
  • user can create text information, and the text should be imported into the push pin automatically when typing finishes.
  • user can record a video from the web camera, and the video should be imported into the push pin automatically when recording finishes.
  • user can remove any media from the push pin.
  • user can assign 5 tags to the push pin: culture, education, environment, health and other.
  • user can set a main tag for the push pin from culture, education, environment, health and other; the appearance of the push pin changes accordingly.
  • user can search the push pins by choosing one of the 4 tags (culture, education, environment, health).
  • user can save the push pin as an XML file, so the push pin is persisted and reusable by other applications.
  • user can delete push pins from the map.
Major Demo - Rich Media Map - User interface
  • a main panel should be displayed when the user opens a push pin.
  • there should be a sub panel, which contains all the operation buttons, at the bottom of the main panel.
  • a file browser should be displayed when the user chooses to import a media file.
  • the media elements inside the push pin should be laid out as a list.
  • user can rearrange the order of any media element by dragging and dropping the media inside the main panel.
  • The switch between view and import mode for the push pin should be an in-place switch.
  • The switch between view and drawing mode for the image media should be an in-place switch.
  • The switch between view and typing mode for the text media should be an in-place switch.
  • The switch between view and recording mode for the video media should be an in-place switch.
Major Demo - Rich Media Map - File browser functionality
  • user can go into a directory.
  • user can go up to the parent directory.
  • when the user chooses to import a media file into the push pin, he can select the file from the browser.
  • the file browser is integrated into the push pin panel.
Major Demo - Rich Media Map - Image component functionality
  • user can change the view mode into drawing mode.
  • in drawing mode, user can select the color of the brush.
  • in drawing mode, user can select the thickness of the brush.
  • in drawing mode, user can draw on to a canvas by moving fingers.
  • in drawing mode, user can erase the drawing.
  • in drawing mode, user can save the drawing.
Major Demo - Rich Media Map - Text component functionality
  • user can change the view mode into editing mode.
  • in editing mode, user can modify the content of the text.
  • in editing mode, user can save the modification.
Major Demo - Rich Media Map - Video component functionality
  • user can play the video.
  • user can pause the video.
Major Demo - Rich Media Map - Video recorder component functionality
  • user can preview the video stream from the camera.
  • user can start recording the live video.
  • user can finish recording the live video.

Additional user interface compromise and solution

compromise: As there is no existing solution to resize a WPF scatter view item in only the vertical direction (the default behaviour is to scale both vertically and horizontally), the user cannot change the size of the push pin panel.

solution: To achieve an elegant layout, the height of the push pin panel will be controlled programmatically. There will be a fixed width and a maximum height, so that no empty area is shown when the push pin panel contains only a small number of media items. A scroll bar appears when the total height of the content exceeds the panel's maximum height.

Saturday 24 March 2012

Surface project - part 12 (demo - Map Media List)

As the supervisors want to store multiple media files in a single push pin, I built a more capable demo to demonstrate this. The demo allows the user to add image and video files to a push pin, and the application treats the different media types differently. For an image file, the user can import a text file as the description of the image. See the snapshot below.




The push pin can store video file as well.



The following steps demonstrate the complete procedure

1. Long-pressing a point on the map adds a new push pin onto the map.



2. Clicking the push pin pops up an empty panel (if you haven't added any media to it yet).


3. Open the custom file browser and select the media file




4. click the import button and choose a txt file as a description for the image.



5. The user can add a video media file as well; choose the video you want to add.



6. Save the push pins to an XML file; the result looks like the following.

<?xml version="1.0"?>
<MediaPinList xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <ListOfMediaPin>
    <MediaPin>
      <Longitude>136.470245160833</Longitude>
      <Latitude>-13.954014816082406</Latitude>
      <ListOfElement>
        <PushElement>
          <FilePath>C:\Users\Public\Pictures\Sample Pictures\Koala.jpg</FilePath>
          <Description />
        </PushElement>
        <PushElement>
          <FilePath>C:\Users\Public\Pictures\Sample Pictures\Penguins.jpg</FilePath>
          <Description></Description>
        </PushElement>
        <PushElement>
          <FilePath>C:\Users\Public\Pictures\Sample Pictures\Chrysanthemum.jpg</FilePath>
          <Description />
        </PushElement>
        <PushElement>
          <FilePath>C:\Users\Public\Videos\Sample Videos\Wildlife.wmv</FilePath>
          <Description />
        </PushElement>
        <PushElement>
          <FilePath>C:\Users\Public\Pictures\Sample Pictures\Lighthouse.jpg</FilePath>
          <Description />
        </PushElement>
      </ListOfElement>
    </MediaPin>
    <MediaPin>
      <Longitude>136.59384822539985</Longitude>
      <Latitude>-13.992668132658892</Latitude>
      <ListOfElement>
        <PushElement>
          <FilePath>C:\Users\Public\Pictures\Sample Pictures\Koala.jpg</FilePath>
          <Description></Description>
        </PushElement>
        <PushElement>
          <FilePath>C:\Users\Public\Videos\Sample Videos\Wildlife.wmv</FilePath>
          <Description />
        </PushElement>
      </ListOfElement>
    </MediaPin>
  </ListOfMediaPin>
</MediaPinList>

Surface project - part 11 (C# Object to XML and vice versa)

The task for this week is to extend the custom media tag browser (here) to add multiple media files into one push pin. The idea is fairly easy: as the surface list box is general enough to host any framework element as an item, the remaining problem is how to persist the push pin to a local file. The supervisors preferred to store the push pin as an XML file. After looking into MSDN, I found that .NET already provides a very flexible and powerful mechanism to serialize objects to XML files and vice versa.

As the object which represents the push pin only contains string and double fields, the default XmlSerializer behaviour is enough: it serializes all public fields and properties, as long as the type has a public parameterless constructor.

The following examples demonstrate the basic coding. The full example can be found here.
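For context, a minimal serializable type that the snippets below could work with; MySettings and its members are illustrative, while the real push-pin type holds the string and double fields mentioned above:

```csharp
public class MySettings
{
    // XmlSerializer requires a public parameterless constructor and
    // serializes public read/write members such as these.
    public double Longitude { get; set; }
    public double Latitude { get; set; }
    public string FilePath { get; set; }
}
```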

How to serialize object to XML

string path = "MySettings.xml";
XmlSerializer x = new XmlSerializer(settings.GetType());
// Dispose the writer so the XML is flushed to disk.
using (StreamWriter writer = new StreamWriter(path))
{
    x.Serialize(writer, settings);
}

How to deserialize XML back to object

MySettings settings;
string path = "MySettings.xml";
XmlSerializer x = new XmlSerializer(typeof(MySettings));
using (StreamReader reader = new StreamReader(path))
{
    settings = (MySettings)x.Deserialize(reader);
}

Saturday 17 March 2012

Surface project - part 10 (demo - mobile device interaction)

Use case:

A mobile device explorer: the user can explore and download images stored on a mobile device through the Surface screen. A case study is here; the conclusion is to use Wi-Fi instead of Bluetooth to establish the communication.

Solution:

This application consists of a master application on the Surface and a slave application on my Android device. The slave application is built with Adobe AIR (a Flex mobile project), so that it can be installed on Android, iOS, BlackBerry OS and many other mobile systems.

The application uses basic socket technology; the protocol for encoding and decoding the data is self-made.
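The post doesn't document the wire format, but a self-made protocol of this kind is typically simple length-prefixed framing; the sketch below shows the idea and is an assumption, not the actual protocol used in the demo:

```csharp
using System;
using System.IO;
using System.Text;

static class Framing
{
    // Each message is a 4-byte big-endian length followed by the UTF-8
    // payload, so the receiver knows where one message ends.
    public static byte[] Encode(string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        byte[] frame = new byte[4 + payload.Length];
        frame[0] = (byte)(payload.Length >> 24);
        frame[1] = (byte)(payload.Length >> 16);
        frame[2] = (byte)(payload.Length >> 8);
        frame[3] = (byte)payload.Length;
        payload.CopyTo(frame, 4);
        return frame;
    }

    public static string Decode(Stream stream)
    {
        byte[] header = new byte[4];
        ReadExactly(stream, header);
        int length = (header[0] << 24) | (header[1] << 16) | (header[2] << 8) | header[3];
        byte[] payload = new byte[length];
        ReadExactly(stream, payload);
        return Encoding.UTF8.GetString(payload);
    }

    private static void ReadExactly(Stream stream, byte[] buffer)
    {
        int offset = 0;
        while (offset < buffer.Length)
        {
            int read = stream.Read(buffer, offset, buffer.Length - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
    }
}
```

Encoding a message on one side and decoding it from the stream on the other round-trips the original string regardless of how the bytes are split across packets.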


User can explore the file in mobile device through a custom file browser




When a file type is available to download, a download button will display beside the file



When a file is downloading, a progressbar will be displayed on the bottom of the browser



The downloaded file will be displayed in a scatter view

Surface project - part 9 (Case study - Bluetooth OR Wifi)

This time the supervisors want to connect a mobile device to the Surface, so that the user can explore and download images stored on the mobile device through the Surface screen.

They suggested using Bluetooth as the transport. However, after doing a case study on this use case, I found that Wi-Fi is more suitable.

Scenario

The user connects his mobile device to the Surface, then explores and downloads images stored on the mobile device through the Surface screen.

Problem

It turns out that this use case is impossible to achieve without a slave application running on the mobile device. This slave application listens on a device port and sends the requested data to the Surface through the wireless network.

Software Requirements

Two pieces of software must be built to achieve the use case: a master application on the Surface and a slave application on the mobile device.

Hardware Solution

We can use Bluetooth or Wi-Fi for the communication. Listed below is the comparison.

Bluetooth
  • No managed library for the .NET platform
  • No unified Bluetooth programming API across mobile systems
  • Transfer speed is very slow
Wifi
  • Managed socket library for the .NET platform
  • Unified socket API provided by Adobe AIR for mobile systems, including Android, iOS, BlackBerry, etc.
  • Transfer speed is by far the best

Conclusion

Bluetooth programming is not suitable for this project
  • Developer (me) needs to master a different programming API for each Bluetooth stack.
  • Developer (me) needs to master a different programming language on each mobile system.
  • Developer (me) needs to wrap an unmanaged Bluetooth library for the .NET platform.
  • A different slave application needs to be built for each mobile system, although the functions are the same.
Wifi programming is the selected option
  • Developer (me) has sufficient knowledge of .NET socket programming.
  • Adobe AIR provides a unified socket API for mobile systems, which means only one slave application needs to be built.
  • Developer (me) has experience with Adobe AIR.

End of Case study

Surface project - part 8 (demo - Media tag browser)

This demo application demonstrates how to use drag and drop in a Surface application to assign metadata to different media files. The tutorial for drag and drop programming can be found here. The post related to metadata is here.

I built a custom file browser to achieve this use case.


When you browse to the image folder, the tag information stored in each image's metadata is displayed beside it.


The user can drag the predefined tags onto the file panel.


The user can change the main tag (red color) of each image file by clicking the tag they want to set, and the appearance of the push pin changes based on the setting.




When the user starts dragging an image onto the map, a suggestion line is displayed, connecting the touch point and the start point.




Now the push pins on the map have different appearances based on their main tag values.


Surface project - part 7 (Media metadata)

After seeing the push pins in the latest demo, the supervisors now want to assign different metadata to image files, so that when the user puts a pin on the map, the appearance of the pin changes based on the image metadata.

A lot of resources can be found online, but unfortunately I have lost the address of the best post I have seen so far.

The bitmap classes provided by WPF include the ability to edit image metadata. However, an in-place update is not always achievable, so as a fallback, if the in-place update fails, we need to overwrite the whole image file.

using (FileStream originalFile = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
{
    BitmapDecoder theDecoder = BitmapDecoder.Create(originalFile, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
    InPlaceBitmapMetadataWriter metadata = theDecoder.Frames[0].CreateInPlaceBitmapMetadataWriter();

    metadata.Keywords = new ReadOnlyCollection<string>(tags);

    // Fall back to rewriting the whole file when there is not enough
    // room for an in-place update.
    if (!metadata.TrySave())
    {
        OverwriteImageMedia(filePath, tags);
    }
}

private static void OverwriteImageMedia(string filePath, Collection<string> tags)
{
    using (FileStream originalFile = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        BitmapDecoder original = BitmapDecoder.Create(originalFile, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
        BitmapEncoder encoder;
        string extension = Path.GetExtension(filePath).ToLowerInvariant();
        if (extension == ".jpeg" || extension == ".jpg")
            encoder = new JpegBitmapEncoder();
        else if (extension == ".png")
            encoder = new PngBitmapEncoder();
        else
            return;
        if (original.Frames[0] != null && original.Frames[0].Metadata != null)
        {
            // Reserve padding so future metadata edits can be done in place.
            uint paddingAmount = 2048;
            BitmapMetadata metadata = original.Frames[0].Metadata.Clone() as BitmapMetadata;
            metadata.SetQuery("/app1/ifd/PaddingSchema:Padding", paddingAmount);
            metadata.SetQuery("/app1/ifd/exif/PaddingSchema:Padding", paddingAmount);
            metadata.SetQuery("/xmp/PaddingSchema:Padding", paddingAmount);
            metadata.Keywords = new ReadOnlyCollection<string>(tags);
            encoder.Frames.Add(BitmapFrame.Create(original.Frames[0], original.Frames[0].Thumbnail, metadata, original.Frames[0].ColorContexts));
        }
        originalFile.Close();
        using (Stream outputFile = File.Open(filePath, FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite))
        {
            encoder.Save(outputFile);
        }
    }
}

Tuesday 13 March 2012

Surface project - part 6 (demo application - map with custom media pushpin)

As I said in the last post, I need to implement functionality for users to pin video elements onto the map by themselves. With the help of DirectShow in .NET, I am able to capture video and save it to the local file system. Based on the WPF Bing Map Control, it is quite easy to build the required functionality. Below is the screen captured on my laptop.




You can select the video file through file browser or you can record the video through the web camera.


First scenario: there is already a pre-made video file and you want to pin it onto the map.

Select the video from file system



The video file is added to the video bar; you can then select the video and pin it onto the map.



Long-pressing a point on the map adds the push pin.


Now, when you touch the push pin, the video element starts at the bottom right of the pin.





Second scenario: there is no pre-made video file and you want to make one yourself.

Clicking "Show Video Recorder" displays a panel similar to the one in the previous post.


After making the video, the video file appears in the video bar as well, so that you can select it and pin it onto the map.

In the next post, I am going to talk about some limitations (or challenges for developers) of Microsoft Surface.

Surface project - part 5 (video capture by using .net DirectShow)

After showing my first demo to the supervisors, they were pleased with the user interface and wanted me to move on to the next scenario: the indigenous people want to record video by themselves through the web camera and pin these videos onto the map at custom locations.

Thus, the first task is how to capture the video stream from the camera. There is a great article and an open source project which wraps the native DirectShow code for the .NET platform.

However, after looking into the project, I found that if you want to preview the live video from the camera, you need to use a Windows Forms control. So how do you use a Windows Forms control in a WPF project? There is a tutorial provided by MSDN.

The following code makes a simple WPF application which captures video from the camera and displays it on the screen.

1. Add two rows to the grid; put the preview control in the first row, and a stack panel containing the operation buttons in the second row.

<Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="480"/>
            <RowDefinition Height="50"/>
        </Grid.RowDefinitions>
        <WindowsFormsHost Grid.Row="0">
            <wf:Panel x:Name="ThePreviewControl" Width="640" Height="480"/>
        </WindowsFormsHost>
       
        <StackPanel Orientation="Horizontal" Grid.Row="1" HorizontalAlignment="Center"  VerticalAlignment="Center">
            <Button Name="StartBtn" Width="100" Click="StartBtn_Click" Margin="0,0,50,0" Content="Start" HorizontalAlignment="Center" VerticalAlignment="Center" />
            <Button Name="StopBtn"  Width="100" Click="StopBtn_Click" Content="Stop" Margin="0,0,50,0"/>
            <TextBox Name="FileTxt" Width="200" Text="c:\test.avi"/>
        </StackPanel>
       
    </Grid>


2. In the code-behind file, add two fields for capturing the video.

        private Capture capture = null;
        private Filters filters = new Filters();


3. Inside the constructor, initialize the capture object and start the preview.
   
            capture = new Capture(filters.VideoInputDevices[0], filters.AudioInputDevices[0]);
            capture.VideoCompressor = filters.VideoCompressors[2]; //DV encode
            capture.AudioCompressor = filters.AudioCompressors[6]; // mp3

            //start preview
            capture.PreviewWindow = ThePreviewControl;


Now, when you start the application, you should see the image captured from camera displayed on the screen.



However, that is not the end of it. If you try to preview the video in a Surface project it won't work, because a Windows Forms control does not support move, scale or rotation. The wrapper class is somewhat tied to Windows Forms, but all I actually need for the preview is the raw image data for each frame. Here is an article where the author extends the wrapper class by adding a callback after each frame. The limitation is that you still need to attach the preview control, otherwise the callback won't fire.

Saturday 10 March 2012

Surface project - part 4 (demo application - map with media)

With the help of the WPF Bing Map Control, I was able to build a custom map quickly and effortlessly. I spent one night and one morning making a small demo application which involves map and multimedia interaction. This is the screen captured on my laptop.


As the purpose is to preview some UI designs, all the image, video and text resources are hard coded in the application. The map is pre-separated into three areas, top, middle and bottom, indicated with different colors.

I assigned three different events to those three areas respectively. If you put a tagged object with value 01 on the top area, a video element is displayed on the screen.


If you put a tagged object with value 02 on the top area, a list of image elements is displayed on the screen.

Putting a finger on the bottom area displays text information about Groote.


Surface project - part 3 (WPF Bing Map Control)

After discussing with the project supervisors, we decided that the first demo application would be a dummy map with multimedia interaction. So how do we build the map? The answer is the WPF Bing Map Control.

The WPF Bing Map Control is quite flexible. It supports multiple layers, custom shapes and web services for location searching. To fully understand how to use this control in your application, you need to read the SDK documentation.

For my project, I certainly need to separate the map into many custom areas for users to play with. Listed below is how to define a custom area on your map.

1. In order to use the Bing Maps WPF Control, you need a Bing Maps Key to authenticate your application.

2. Add a map control to your grid. I set the centre point at Groote, where the indigenous people live.
<m:Map Name="TheMap" Grid.Row="0" Center="-14, 136.53" ZoomLevel="11" Mode="Road"
                MouseDoubleClick="Map_MouseDoubleClick"
               CredentialsProvider="THE KEY">
     
</m:Map>
3. Add the "Map_MouseDoubleClick" event handler in the cs file to record the wanted points and display them on the screen.
private List<Location> listOfBorderLocation = new List<Location>();

private void Map_MouseDoubleClick(object sender, MouseButtonEventArgs e)
{
    // Disables the default mouse double-click action.
    e.Handled = true;
    // Get the mouse click coordinates
    Point mousePosition = e.GetPosition(this);
    // Convert the mouse coordinates to a location on the map
    Location pinLocation = TheMap.ViewportPointToLocation(mousePosition);
    listOfBorderLocation.Add(pinLocation);
    TheTxt.Text += "\n" + pinLocation.Latitude + "," + pinLocation.Longitude;
}
4. To display the selected points and make a border from them, I put a scatter view on top of the map.
  <s:ScatterView Grid.Row="0" >
            <s:ScatterViewItem Center="200,400" Orientation="0" CanMove="False" CanRotate="False" CanScale="False">
                <TextBox Name="TheTxt"  Width="250"  Height="800" TextWrapping="Wrap"/>
            </s:ScatterViewItem>
            <s:ScatterViewItem Center="200,600" Orientation="0" CanMove="False" CanRotate="False" CanScale="False">
                <s:SurfaceButton Name="TheBtn" Content="Make Border" Click="TheBtn_Click"/>
            </s:ScatterViewItem>
   </s:ScatterView>
5. Add the "TheBtn_Click" event handler in the cs file to make a border from the selected points.
private void TheBtn_Click(object sender, RoutedEventArgs e)
{
    if (listOfBorderLocation.Count >= 2)
    {
        SetUpNewPolygon();
        TheTxt.Text += "\n....MAKE BORDER.....";
        foreach (Location l in listOfBorderLocation)
            newPolygon.Locations.Add(l);
        polygonPointLayer.Children.Add(newPolygon);
        listOfBorderLocation.Clear();
    }
}

private MapPolygon newPolygon = null;
private MapLayer polygonPointLayer = new MapLayer();

private void SetUpNewPolygon()
{
    newPolygon = new MapPolygon();
    // Defines the polygon fill details
    newPolygon.Locations = new LocationCollection();
    newPolygon.Fill = new SolidColorBrush(Colors.Blue);
    newPolygon.Stroke = new SolidColorBrush(Colors.Green);
    newPolygon.StrokeThickness = 3;
    newPolygon.Opacity = 0.5;
}

6. Run the application and double-click three different points on the map to create a custom area.



Wednesday 7 March 2012

Surface project - part 1

Just finished my IELTS speaking test this afternoon; it did not go well. Anyway, last week I started my final-semester project, which is to build an interactive board for the indigenous community using Microsoft Surface 2. There is a new Surface device in the lab, and the supervisors want to know the capabilities of the device: the new technology, the programming interface, what can be done, and what the limitations are.

After googling the hardware specification, I found out how Surface differs from other touch-enabled devices. It is big: the model, a 40-inch device, is made by Samsung. It supports more than 50 simultaneous touch points, so it is meant to be used by multiple people at once. It also has a special vision system that recognizes tagged objects, which means you can attach a tag to anything, put it on the Surface screen, and the screen can pass the tag's value to your software.

My first impression of the device is that the touch input is not very accurate. It was hard to type the password correctly, so I used an external keyboard and mouse instead. There is only one pre-installed Surface application, Microsoft Bing. The user experience of that application is not good enough; especially when moving things around the screen, the lag is too obvious to ignore. Also, the screen material somehow resists the movement of fingers, so a big anti-friction screen protector may be a good choice.

In the next part, I'll talk about some findings on SDK samples.

Surface project - part 2 (tagged object)

After reading the MSDN documentation for the SDK, I installed the SDK samples on the device and tested them out. Microsoft provides two ways to program the Surface device: WPF and XNA.

There are several WPF controls specific to Surface. In my opinion, the most useful control is the ScatterView, which makes multi-person applications possible. XNA can certainly be used for applications that are not game related; however, most of the time WPF is sufficient to build a decent application.

Drag-and-drop is no longer a novelty in programming, so the first demo application I looked into is Item Compare, which utilizes the vision system to interact with physical objects. The logic of the application is fairly simple: it reads the value of a tag, uses it as a key to look up the corresponding content in an external XML file, and then displays that content to the user. This is a basis that other applications can build on. Your application can use tagged objects as simple lookup keys, or you can implement different commands based on tag values. For example, value 01 could trigger a music function, value 02 could start a video stream, and placing values 01 and 02 on the screen together could fire a zoom-in event.
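The tag-to-content lookup can be sketched with LINQ to XML. Note that the XML schema, class name, and method below are my own assumptions for illustration; they are not the actual code or data format of the Item Compare sample:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Hypothetical lookup: map a tag value to content stored in an XML file
// shaped like <Items><Item TagValue="1"><Content>...</Content></Item></Items>.
// The schema and names are assumptions, not the SDK sample's real format.
public static class TagContentLookup
{
    public static string Lookup(XDocument doc, long tagValue)
    {
        return doc.Descendants("Item")
                  .Where(i => (long)i.Attribute("TagValue") == tagValue)
                  .Select(i => (string)i.Element("Content"))
                  .FirstOrDefault(); // null when no item matches the tag
    }
}
```

In a real Surface application the tag value would come from the vision system when a tagged object is placed on the screen, and the returned content would then be shown next to the object.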

A problem found with tagged objects: use the Data Visualizer to test the value of your tagged object, and you will find that if the tag is not flat enough or is too thin (just a piece of paper), the vision system will read an incorrect tag value.

Problems found in other sample applications:

1. The interval between touch-move events is too long to keep up with fast finger movement. If you try the ink canvas, you will find that drawing a line very quickly with your finger leaves many small gaps along the line.
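One possible workaround for those gaps is to interpolate extra points between consecutive touch samples before adding them to the stroke. This is a sketch of my own, not code from the SDK sample; the `maxGap` threshold would be tuned to the stroke thickness:

```csharp
using System;
using System.Collections.Generic;

// Workaround sketch: given two consecutive touch samples a and b, return
// them plus evenly spaced intermediate points so that no two consecutive
// points are farther apart than maxGap. Feeding the densified points to the
// ink stroke hides the gaps left by the sparse touch-move events.
public static class StrokeSmoother
{
    public static List<(double X, double Y)> FillGap(
        (double X, double Y) a, (double X, double Y) b, double maxGap)
    {
        var points = new List<(double X, double Y)> { a };
        double dx = b.X - a.X, dy = b.Y - a.Y;
        double dist = Math.Sqrt(dx * dx + dy * dy);
        int steps = (int)Math.Ceiling(dist / maxGap);
        for (int i = 1; i < steps; i++)
        {
            double t = (double)i / steps; // fraction of the way from a to b
            points.Add((a.X + t * dx, a.Y + t * dy));
        }
        points.Add(b);
        return points;
    }
}
```

For example, two samples 10 pixels apart with a 2-pixel maximum gap yield four interpolated points between the endpoints.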

2. The user experience of scaling, rotating, and moving controls (image, video, etc.) is not smooth or responsive enough. Especially for the rotate and move actions, the lag is too obvious to give decent feedback. However, it would be overkill to override the default animation behaviour.


From the next post I will start building some small demo applications related to my project, the interactive board for the indigenous community. The proposed application is a big map covering the area of the indigenous land, so that indigenous students can touch mountains and rivers to learn the history of their home. Thus, the current task is to find out how to build the map.