The memo blog.

About programming languages and useful libraries.

Python: Memo(I)


After slacking off over the Lunar New Year break, this time I want to study a Python program someone else wrote. It fetches Google Maps imagery and saves it to disk. For certain reasons I need to fetch map tiles from Google Maps, and besides fetching them I also need to compute the size of each tile; for details see Tiles à la Google Maps: Coordinates, Tile Bounds and Projection, which also explains the Google Maps projection. In the application I have in mind there are a few parts: the images Google Maps returns for a requested size are always fixed-size tiles, so I want to track each tile in a few states, with the zoom level set at around 21; I plan to use this for certain purposes. In short, the features I need are (a sketch of the tile math follows the list below):

  • Obtain the user's GPS position
  • Map the user's GPS position to the corresponding Google Maps tile index
  • Retrieve the bounds of each Google Maps tile

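Before tracing the samples, here is a minimal sketch of the second and third items above — converting a GPS fix to a Google Maps tile index and recovering that tile's bounds — using the standard Web Mercator formulas from the Tiles à la Google Maps reference. The function names are my own, not from the sample code:

import math

def latlon_to_tile(lat, lon, zoom):
    # Number of tiles per axis doubles at each zoom level.
    n = 2 ** zoom
    tx = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    ty = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return tx, ty

def tile_bounds(tx, ty, zoom):
    # Returns (south, west, north, east) of a tile in WGS84 degrees.
    n = float(2 ** zoom)
    def ytolat(y):
        return math.degrees(math.atan(math.sinh(math.pi * (1.0 - 2.0 * y / n))))
    west = tx / n * 360.0 - 180.0
    east = (tx + 1) / n * 360.0 - 180.0
    return ytolat(ty + 1), west, ytolat(ty), east

Feeding the user's GPS position into latlon_to_tile() at zoom 21 gives the tile index to store, and tile_bounds() gives that tile's extent for the size calculation.
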
Some of the rest will need help from a database, and the final goal is to deploy everything to GAE and serve it to an Android client. Below are two sample programs found online; let's trace through them slowly.

pyMapGrab.py
import urllib
import StringIO
from PIL import Image

# NOTE: this is only the main loop of the script. The variables below --
# cols, rows, dx, dy, ulx, uly, zoom, scale, bottom, and the tile size
# largura/altura/alturaplus (Portuguese for width/height) -- as well as
# pixelstolatlon() are all defined earlier in the original script.
final = Image.new("RGB", (int(dx), int(dy)))
for x in range(cols):
    for y in range(rows):
        # Center of the current tile, converted back to lat/lon:
        dxn = largura * (0.5 + x)
        dyn = altura * (0.5 + y)
        latn, lonn = pixelstolatlon(ulx + dxn, uly - dyn - bottom / 2, zoom)
        position = ','.join((str(latn), str(lonn)))
        print x, y, position
        # Request one Static Maps image centered on that position:
        urlparams = urllib.urlencode({'center': position,
                                      'zoom': str(zoom),
                                      'size': '%dx%d' % (largura, alturaplus),
                                      'maptype': 'satellite',
                                      'sensor': 'false',
                                      'scale': scale})
        url = 'http://maps.google.com/maps/api/staticmap?' + urlparams
        f = urllib.urlopen(url)
        im = Image.open(StringIO.StringIO(f.read()))
        # Paste the tile into the big canvas at its pixel offset:
        final.paste(im, (int(x * largura), int(y * altura)))
final.show()

This is the main loop. It starts by creating one huge blank image using PIL (the Python Imaging Library), which is fairly easy to install, though it still needs some configuration or Image.new() will not be found. Broken down, the program grabs one small image at a time and keeps pasting them together into one large image; the tiles all come from Google Maps, of course. For the parameters sent in the request URL, see the Google Maps Static API; there seem to be plenty of options to choose from. Time is short and I need to hurry up with the Vuforia program, so I will fill in the remaining details later.
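
One gap worth flagging: the excerpt calls pixelstolatlon(), which is defined earlier in the original script and not shown here. Below is a plausible reconstruction of that helper and its inverse, assuming the usual spherical Web Mercator math (radius 6378137 m, 256-pixel base tiles); note the loop above computes uly - dyn, so the y axis is assumed to grow northward:

import math

EARTH_RADIUS = 6378137.0                       # WGS84 equatorial radius (m)
ORIGIN_SHIFT = math.pi * EARTH_RADIUS          # half the world width in meters
INITIAL_RESOLUTION = 2 * ORIGIN_SHIFT / 256.0  # meters per pixel at zoom 0

def latlontopixels(lat, lon, zoom):
    # WGS84 degrees -> global pixel coordinates (y grows northward).
    mx = lon * ORIGIN_SHIFT / 180.0
    my = math.log(math.tan((90.0 + lat) * math.pi / 360.0)) * EARTH_RADIUS
    res = INITIAL_RESOLUTION / (2 ** zoom)
    return (mx + ORIGIN_SHIFT) / res, (my + ORIGIN_SHIFT) / res

def pixelstolatlon(px, py, zoom):
    # Global pixel coordinates -> WGS84 degrees.
    res = INITIAL_RESOLUTION / (2 ** zoom)
    mx = px * res - ORIGIN_SHIFT
    my = py * res - ORIGIN_SHIFT
    lon = mx / ORIGIN_SHIFT * 180.0
    lat = math.degrees(2 * math.atan(math.exp(my / EARTH_RADIUS)) - math.pi / 2)
    return lat, lon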

Google App Engine: Memo(I)


I have been talking about Google App Engine (GAE) for a long time; one of this year's goals is to finally learn it and be able to write the features I want. Overall there were no big problems, with just one thing to watch out for:

Avoid uppercase letters in every file name!

Keep that in mind and everything should be fine. Setting up the GAE development environment poses no particular problems, so I will skip the setup details here. PyDev already ships with a GAE project option by default, so that is not a big deal either. The sample code is on GitHub; it is basically the standard GAE example. Let me record what I have understood of the process so far. There are two files, pygmapengine.py and app.yaml, with the following contents:

pygmapengine.py
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('pyGMapMain-Hello')

app = webapp2.WSGIApplication([('/', MainPage)], debug=True)

In pygmapengine.py, the main package used is webapp2. I will study the details of how to use it later; for now it just has to work. It looks like it creates a MainPage handler that writes the string pyGMapMain-Hello, and that is about it. app.yaml seems to be the more important part, because it is the config file GAE reads when it builds the app; there is reference documentation for app.yaml as well. Going through its keys:

  • application: described as "The application identifier. This is the identifier you selected when you created the application in the Administration Console." So it must match what you registered in the GAE admin console, otherwise the upload will not line up. I registered pygmapengine on GAE, so no problem; the code can be seen here.
  • version: presumably the version? Changing it should also change the version shown in the admin console, and the docs say as much.
  • runtime: the version of the Python interpreter in use; GAE currently supports up to 2.7.
  • api_version: apparently the version of the GAE API being used, which the docs currently list as 1; to upgrade later, just change this field.
  • threadsafe: something like thread safety? The description is "Configures your application to use concurrent requests.", so it should control whether the app can serve concurrent requests.
  • handlers: this part feels a bit complex, since it configures things like URL paths; I will leave it for future study. The main documentation is under Script_Handlers; there do not seem to be many Chinese-language sites discussing these settings.

app.yaml
application: pygmapengine
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: pygmapengine.app

Finally, I think I will go straight to writing a JSON example for data access!
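
Something like the following should do as a first cut — a minimal sketch only, where the handler name and the /json route are my own placeholders, not from the actual project:

import json

import webapp2

# A hypothetical JSON endpoint in the same style as MainPage above.
class JsonDemo(webapp2.RequestHandler):
    def get(self):
        payload = {'status': 'ok', 'message': 'pyGMapMain-Hello'}
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json.dumps(payload))

app = webapp2.WSGIApplication([('/json', JsonDemo)], debug=True)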

Vuforia-II: Camera(III)


In the previous post, Vuforia-II: Camera(II), we traced as far as InitQCARTask(). Next comes updateApplicationStatus(APPSTATUS_INIT_APP_AR); inside case APPSTATUS_INIT_APP_AR: the main call is initApplicationAR(), so let's look at what this function does.

cameraDemo.java
 /** Initializes AR application components. */
  private void initApplicationAR() {
      // Do application initialization in native code (e.g. registering
      // callbacks, etc.):
      initApplicationNative(mScreenWidth, mScreenHeight);

      // Create OpenGL ES view:
      int depthSize = 16;
      int stencilSize = 0;
      boolean translucent = QCAR.requiresAlpha();

      mGlView = new QCARSampleGLView(this);
      mGlView.init(mQCARFlags, translucent, depthSize, stencilSize);

      mRenderer = new cameraDemoRenderer();
      mRenderer.mActivity = this;
      mGlView.setRenderer(mRenderer);

      LayoutInflater inflater = LayoutInflater.from(this);
      mUILayout = (RelativeLayout) inflater.inflate(R.layout.activity_main,
              null, false);

      mUILayout.setVisibility(View.VISIBLE);
      mUILayout.setBackgroundColor(Color.BLACK);

      // Gets a reference to the loading dialog
      mLoadingDialogContainer = mUILayout
              .findViewById(R.id.loading_indicator);

      // Shows the loading indicator at start
      loadingDialogHandler.sendEmptyMessage(SHOW_LOADING_DIALOG);

      // Adds the inflated layout to the view
      addContentView(mUILayout, new LayoutParams(LayoutParams.MATCH_PARENT,
              LayoutParams.MATCH_PARENT));
  }

Here we run into native code again: initApplicationNative(mScreenWidth, mScreenHeight). This native function takes mScreenWidth and mScreenHeight as parameters; these were already obtained in the initApplication() stage described in the previous post, so unless something went wrong they should be non-zero. The native function itself does quite a lot. The part I find most impressive is jclass activityClass = env->GetObjectClass(obj); — in Android JNI this is how the native side talks back to a Java class, and here it is used mainly to initialize the textures. On the Java side there is texture-related code whose job is loading: during initialization, private void loadTextures() runs in the onCreate() stage and reads the texture data bundled inside the apk. Another function, public int getTextureCount(), reports how many textures were loaded. A diagram would explain this better; maybe a proper illustrated tutorial post later. In short, on the JNI side the code first finds the class of the object that called into JNI, then looks up whether that class has a method named getTextureCount. The third argument, "()I", is the JNI method signature — it denotes a method taking no arguments and returning an int; this blog has an explanation of these signature strings. So the native code fetches that method from the class, calls it to learn how many textures there are, and then goes on to initialize the textures needed for drawing. I will study this part properly once I know OpenGL better; for now it is enough to know this section initializes the textures used for rendering.

cameraDemo.cpp
JNIEXPORT void JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_initApplicationNative(
                            JNIEnv* env, jobject obj, jint width, jint height)
{
    LOG("Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_initApplicationNative");

    // Store screen dimensions
    screenWidth = width;
    screenHeight = height;

    // Handle to the activity class:
    jclass activityClass = env->GetObjectClass(obj);

    jmethodID getTextureCountMethodID = env->GetMethodID(activityClass,
                                                    "getTextureCount", "()I");
    if (getTextureCountMethodID == 0)
    {
        LOG("Function getTextureCount() not found.");
        return;
    }

    textureCount = env->CallIntMethod(obj, getTextureCountMethodID);
    if (!textureCount)
    {
        LOG("getTextureCount() returned zero.");
        return;
    }

    textures = new Texture*[textureCount];

    jmethodID getTextureMethodID = env->GetMethodID(activityClass,
        "getTexture", "(I)Lcom/qualcomm/QCARSamples/ImageTargets/Texture;");

    if (getTextureMethodID == 0)
    {
        LOG("Function getTexture() not found.");
        return;
    }

    // Register the textures
    for (int i = 0; i < textureCount; ++i)
    {

        jobject textureObject = env->CallObjectMethod(obj, getTextureMethodID, i);
        if (textureObject == NULL)
        {
            LOG("GetTexture() returned zero pointer");
            return;
        }

        textures[i] = Texture::create(env, textureObject);
    }
    LOG("Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_initApplicationNative finished");
}

inittexture之後底下的工作大部份就是在初始化一些視窗元件等等,還有把View加入到addContentView()當中,我想應該可以先跳過了。接著進入updateApplicationStatus(APPSTATUS_LOAD_TRACKER);,在case APPSTATUS_LOAD_TRACKER:當中,主要就是進行mLoadTrackerTask = new LoadTrackerTask(); mLoadTrackerTask.execute();另外開個Task來讀取Tracker,在這邊主要的程式碼為:

cameraDemo.java
    /** An async task to load the tracker data asynchronously. */
    private class LoadTrackerTask extends AsyncTask<Void, Integer, Boolean>
    {
        protected Boolean doInBackground(Void... params)
        {
            // Prevent the onDestroy() method to overlap:
            synchronized (mShutdownLock)
            {
                // Load the tracker data set:
                return (loadTrackerData() > 0);
            }
        }

        protected void onPostExecute(Boolean result)
        {
            DebugLog.LOGD("LoadTrackerTask::onPostExecute: execution " +
                        (result ? "successful" : "failed"));

            if (result)
            {
                // The stones and chips data set is now active:
                mIsStonesAndChipsDataSetActive = true;

                // Done loading the tracker, update application status:
                updateApplicationStatus(APPSTATUS_INITED);
            }
            else
            {
                // Create dialog box for display error:
                AlertDialog dialogError = new AlertDialog.Builder
                (
                    ImageTargets.this
                ).create();

                dialogError.setButton
                (
                    DialogInterface.BUTTON_POSITIVE,
                    "Close",
                    new DialogInterface.OnClickListener()
                    {
                        public void onClick(DialogInterface dialog, int which)
                        {
                            // Exiting application:
                            System.exit(1);
                        }
                    }
                );

                // Show dialog box with error message:
                dialogError.setMessage("Failed to load tracker data.");
                dialogError.show();
            }
        }
    }

The main protected Boolean doInBackground(Void... params) does only one thing: return (loadTrackerData() > 0);. That is yet another native function, so as before, let's jump in and take a look.

cameraDemo.cpp
JNIEXPORT int JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_loadTrackerData(JNIEnv *, jobject)
{
    LOG("Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_loadTrackerData");

    // Get the image tracker:
    QCAR::TrackerManager& trackerManager = QCAR::TrackerManager::getInstance();
    QCAR::ImageTracker* imageTracker = static_cast<QCAR::ImageTracker*>(
                    trackerManager.getTracker(QCAR::Tracker::IMAGE_TRACKER));
    if (imageTracker == NULL)
    {
        LOG("Failed to load tracking data set because the ImageTracker has not"
            " been initialized.");
        return 0;
    }

    // Create the data sets:
    dataSetStonesAndChips = imageTracker->createDataSet();
    if (dataSetStonesAndChips == 0)
    {
        LOG("Failed to create a new tracking data.");
        return 0;
    }

    dataSetTarmac = imageTracker->createDataSet();
    if (dataSetTarmac == 0)
    {
        LOG("Failed to create a new tracking data.");
        return 0;
    }

    // Load the data sets:
    if (!dataSetStonesAndChips->load("StonesAndChips.xml", QCAR::DataSet::STORAGE_APPRESOURCE))
    {
        LOG("Failed to load data set.");
        return 0;
    }

    if (!dataSetTarmac->load("Tarmac.xml", QCAR::DataSet::STORAGE_APPRESOURCE))
    //if (!dataSetTarmac->load("FunnyDB.xml", QCAR::DataSet::STORAGE_APPRESOURCE))
    {
        LOG("Failed to load data set.");
        return 0;
    }

    // Activate the data set:
    if (!imageTracker->activateDataSet(dataSetStonesAndChips))
    {
        LOG("Failed to activate data set.");
        return 0;
    }

    LOG("Successfully loaded and activated data set.");
    return 1;
}

It seems every operation on the TrackerManager starts with getInstance() — worth remembering. The main job here is loading the external DataSets, whose declarations appear at the top of the file. Nothing particularly special; for details the API docs should cover everything — here is the DataSet API reference. This part should be reusable.

QCAR::DataSet* dataSetStonesAndChips    = 0;
QCAR::DataSet* dataSetTarmac            = 0;

Having seen that, the next step is updateApplicationStatus(APPSTATUS_INITED); in case APPSTATUS_INITED: things seem ready to start running. The main call in this stage is onQCARInitializedNative(); the rest is System.gc() and adding the GLView via addContentView(). Next let's see what onQCARInitializedNative() actually executes.

cameraDemo.cpp
JNIEXPORT void JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_onQCARInitializedNative(JNIEnv *, jobject)
{
    // Register the update callback where we handle the data set swap:
    QCAR::registerCallback(&updateCallback);

    // Comment in to enable tracking of up to 2 targets simultaneously and
    // split the work over multiple frames:
    // QCAR::setHint(QCAR::HINT_MAX_SIMULTANEOUS_IMAGE_TARGETS, 2);
}

It turns out this is just QCAR::registerCallback(&updateCallback);. Checking the API, the signature is void QCAR_API QCAR::registerCallback( UpdateCallback *object ) and the description reads "Registers an object to be called when new tracking data is available." The updateCallback here is an instance of a class:

ImageTargets_UpdateCallback updateCallback;
cameraDemo.cpp
// Object to receive update callbacks from QCAR SDK
class ImageTargets_UpdateCallback : public QCAR::UpdateCallback
{
    virtual void QCAR_onUpdate(QCAR::State& /*state*/)
    {
        if (switchDataSetAsap)
        {
            switchDataSetAsap = false;

            // Get the image tracker:
            QCAR::TrackerManager& trackerManager = QCAR::TrackerManager::getInstance();
            QCAR::ImageTracker* imageTracker = static_cast<QCAR::ImageTracker*>(
                trackerManager.getTracker(QCAR::Tracker::IMAGE_TRACKER));
            if (imageTracker == 0 || dataSetStonesAndChips == 0 || dataSetTarmac == 0 ||
                imageTracker->getActiveDataSet() == 0)
            {
                LOG("Failed to switch data set.");
                return;
            }

            if (imageTracker->getActiveDataSet() == dataSetStonesAndChips)
            {
                imageTracker->deactivateDataSet(dataSetStonesAndChips);
                imageTracker->activateDataSet(dataSetTarmac);
            }
            else
            {
                imageTracker->deactivateDataSet(dataSetTarmac);
                imageTracker->activateDataSet(dataSetStonesAndChips);
            }
        }
    }
};

Looking at it, this callback is for switching DataSets. The main class has a menu handler for selecting the other DataSet, and it too goes through native code, which does nothing more than:

cameraDemo.cpp
JNIEXPORT void JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_switchDatasetAsap(JNIEnv *, jobject)
{
    switchDataSetAsap = true;
}

So registering this callback is presumably just for watching for DataSet changes. (Writing this far, I am suddenly running out of steam...) After all of that we reach what seems to be the final stage, updateApplicationStatus(APPSTATUS_CAMERA_RUNNING); the stages after it appear to handle stopping and shutting down. In case APPSTATUS_CAMERA_RUNNING: the only call seems to be startCamera(), another native function; let's see what it actually does.

cameraDemo.cpp
JNIEXPORT void JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_startCamera(JNIEnv *,
                                                                         jobject)
{
    LOG("Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_startCamera");

    // Select the camera to open, set this to QCAR::CameraDevice::CAMERA_FRONT 
    // to activate the front camera instead.
    QCAR::CameraDevice::CAMERA camera = QCAR::CameraDevice::CAMERA_DEFAULT;

    // Initialize the camera:
    if (!QCAR::CameraDevice::getInstance().init(camera))
        return;

    // Configure the video background
    configureVideoBackground();

    // Select the default mode:
    if (!QCAR::CameraDevice::getInstance().selectVideoMode(
                                QCAR::CameraDevice::MODE_DEFAULT))
        return;

    // Start the camera:
    if (!QCAR::CameraDevice::getInstance().start())
        return;

    // Uncomment to enable flash
    //if(QCAR::CameraDevice::getInstance().setFlashTorchMode(true))
    //   LOG("IMAGE TARGETS : enabled torch");

    // Uncomment to enable infinity focus mode, or any other supported focus mode
    // See CameraDevice.h for supported focus modes
    //if(QCAR::CameraDevice::getInstance().setFocusMode(QCAR::CameraDevice::FOCUS_MODE_INFINITY))
    //   LOG("IMAGE TARGETS : enabled infinity focus");

    // Start the tracker:
    QCAR::TrackerManager& trackerManager = QCAR::TrackerManager::getInstance();
    QCAR::Tracker* imageTracker = trackerManager.getTracker(QCAR::Tracker::IMAGE_TRACKER);
    if(imageTracker != 0)
        imageTracker->start();
}

First, checking the QCAR Camera API, the type QCAR::CameraDevice::CAMERA is an enum with three values:

Enumerator:
CAMERA_DEFAULT: Default camera device. Usually BACK.
CAMERA_BACK   : Rear facing camera.
CAMERA_FRONT  : Front facing camera.

Nothing special — it just says which camera I want, and DEFAULT is usually the rear one. Next the code gets the instance and calls init(), and then calls yet another native function. Honestly, it feels like it is just setting the screen width and height, though perhaps with extra parameters needed for the 3D objects? Let's first look at the API for QCAR::VideoBackgroundConfig config;. It has five main public attributes:

bool QCAR::VideoBackgroundConfig::mEnabled
bool QCAR::VideoBackgroundConfig::mSynchronous
Vec2I QCAR::VideoBackgroundConfig::mPosition
Vec2I QCAR::VideoBackgroundConfig::mSize
VIDEO_BACKGROUND_REFLECTION QCAR::VideoBackgroundConfig::mReflection

These also have documentation worth consulting; the code here only sets the first four. mEnabled is described as "Enables/disables rendering of the video background." — apparently whether to render the background at all.

mSynchronous is "Enables/disables synchronization of render and camera frame rate.", with the added note "If synchronization is enabled the SDK will attempt to match the rendering frame rate with the camera frame rate. This may result in a performance gain as potentially redundant render calls are avoided. Enabling this is not recommended if your augmented content needs to be animated at a rate higher than the rate at which the camera delivers frames." So it decides whether rendering is synced to the camera frame rate. The note says that if your content is heavily animated, syncing may cost you: if your animation needs 100 fps and you sync it to a 30 fps camera, it will fall apart. Since we are rendering a static teapot here, it should not matter — unless the teapot has to spin by itself, and even then it probably makes little difference.

Next is mPosition, described as "Relative position of the video background in the render target in pixels. Describes the offset of the center of video background to the center of the screen (viewport) in pixels. A value of (0,0) centers the video background, whereas a value of (-10,15) moves the video background 10 pixels to the left and 15 pixels upwards." So it is an offset: normally the background sits at the center of the render target, and this lets you shift it. Here we do not want any offset, so both values are 0.0f (the f presumably marking a float).

Then comes mSize: "Width and height of the video background in pixels." — the background's width and height, in pixels. The docs add: "Using the device's screen size for this parameter scales the image to fullscreen. Notice that if the camera's aspect ratio is different than the screen's aspect ratio this will create a non-uniform stretched image." So the camera image should keep the same aspect ratio as the screen — which must be why the code goes on to query the VideoMode (here is the VideoMode API). It does return the camera mode, but there are only three modes, and the docs never state the resolution of each MODE, so I do not know how it is decided; knowing what it does is enough for now.

And with that we are finally done — so it should run now? Writing all this, I keep feeling there are plenty of pieces that could be pulled out into a template. Later I will think about how to rewrite this, or write one from scratch, to really understand the whole flow from nothing to a working app.

MODE_DEFAULT: Default camera mode.
MODE_OPTIMIZE_SPEED: Fast camera mode.
MODE_OPTIMIZE_QUALITY: High-quality camera mode.
cameraDemo.cpp
void
configureVideoBackground()
{
    // Get the default video mode:
    QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();
    QCAR::VideoMode videoMode = cameraDevice.
                                getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);


    // Configure the video background
    QCAR::VideoBackgroundConfig config;
    config.mEnabled = true;
    config.mSynchronous = true;
    config.mPosition.data[0] = 0.0f;
    config.mPosition.data[1] = 0.0f;

    if (isActivityInPortraitMode)
    {
        //LOG("configureVideoBackground PORTRAIT");
        config.mSize.data[0] = videoMode.mHeight
                                * (screenHeight / (float)videoMode.mWidth);
        config.mSize.data[1] = screenHeight;

        if(config.mSize.data[0] < screenWidth)
        {
            LOG("Correcting rendering background size to handle missmatch between screen and video aspect ratios.");
            config.mSize.data[0] = screenWidth;
            config.mSize.data[1] = screenWidth *
                              (videoMode.mWidth / (float)videoMode.mHeight);
        }
    }
    else
    {
        //LOG("configureVideoBackground LANDSCAPE");
        config.mSize.data[0] = screenWidth;
        config.mSize.data[1] = videoMode.mHeight
                            * (screenWidth / (float)videoMode.mWidth);

        if(config.mSize.data[1] < screenHeight)
        {
            LOG("Correcting rendering background size to handle missmatch between screen and video aspect ratios.");
            config.mSize.data[0] = screenHeight
                                * (videoMode.mWidth / (float)videoMode.mHeight);
            config.mSize.data[1] = screenHeight;
        }
    }

    LOG("Configure Video Background : Video (%d,%d), Screen (%d,%d), mSize (%d,%d)", videoMode.mWidth, videoMode.mHeight, screenWidth, screenHeight, config.mSize.data[0], config.mSize.data[1]);

    // Set the config:
    QCAR::Renderer::getInstance().setVideoBackgroundConfig(config);
}
cameraDemo.java
 /**
  * NOTE: this method is synchronized because of a potential concurrent
  * access by ImageTargets::onResume() and InitQCARTask::onPostExecute().
  */
  private synchronized void updateApplicationStatus(int appStatus) {
      // Exit if there is no change in status:
      if (mAppStatus == appStatus)
          return;

      // Store new status value:
      mAppStatus = appStatus;

      // Execute application state-specific actions:
      switch (mAppStatus) {
      case APPSTATUS_INIT_APP:
          // Initialize application elements that do not rely on QCAR
          // initialization:
          initApplication();

          // Proceed to next application initialization status:
          updateApplicationStatus(APPSTATUS_INIT_QCAR);
          break;

      case APPSTATUS_INIT_QCAR:
          // Initialize QCAR SDK asynchronously to avoid blocking the
          // main (UI) thread.
          //
          // NOTE: This task instance must be created and invoked on the
          // UI thread and it can be executed only once!
          try {
              mInitQCARTask = new InitQCARTask();
              mInitQCARTask.execute();
          } catch (Exception e) {
              DebugLog.LOGE("cameraDemo", "Initializing QCAR SDK failed");
          }
          break;

      case APPSTATUS_INIT_TRACKER:
          // Initialize the ImageTracker:
          if (initTracker() > 0) {
              // Proceed to next application initialization status:
              updateApplicationStatus(APPSTATUS_INIT_APP_AR);
          }
          break;

      case APPSTATUS_INIT_APP_AR:
          // Initialize Augmented Reality-specific application elements
          // that may rely on the fact that the QCAR SDK has been
          // already initialized:
          initApplicationAR();

          // Proceed to next application initialization status:
          updateApplicationStatus(APPSTATUS_LOAD_TRACKER);
          break;

      case APPSTATUS_LOAD_TRACKER:
          // Load the tracking data set:
          //
          // NOTE: This task instance must be created and invoked on the
          // UI thread and it can be executed only once!
          try {
              mLoadTrackerTask = new LoadTrackerTask();
              mLoadTrackerTask.execute();
          } catch (Exception e) {
              DebugLog.LOGE("cameraDemo", "Loading tracking data set failed");
          }
          break;

      case APPSTATUS_INITED:
          // Hint to the virtual machine that it would be a good time to
          // run the garbage collector:
          //
          // NOTE: This is only a hint. There is no guarantee that the
          // garbage collector will actually be run.
          System.gc();

          // Native post initialization:
          onQCARInitializedNative();

          // Activate the renderer:
          mRenderer.mIsActive = true;

          // Now add the GL surface view. It is important
          // that the OpenGL ES surface view gets added
          // BEFORE the camera is started and video
          // background is configured.
          addContentView(mGlView, new LayoutParams(LayoutParams.MATCH_PARENT,
                  LayoutParams.MATCH_PARENT));

          // Sets the UILayout to be drawn in front of the camera
          // mUILayout.bringToFront();

          // Start the camera:
          updateApplicationStatus(APPSTATUS_CAMERA_RUNNING);

          break;

      case APPSTATUS_CAMERA_STOPPED:
          // Call the native function to stop the camera:
          stopCamera();
          break;

      case APPSTATUS_CAMERA_RUNNING:
          // Call the native function to start the camera:
          startCamera();

          // Hides the Loading Dialog
          // loadingDialogHandler.sendEmptyMessage(HIDE_LOADING_DIALOG);

          // Sets the layout background to transparent
          // mUILayout.setBackgroundColor(Color.TRANSPARENT);

          // Set continuous auto-focus if supported by the device,
          // otherwise default back to regular auto-focus mode.
          // This will be activated by a tap to the screen in this
          // application.
          if (!setFocusMode(FOCUS_MODE_CONTINUOUS_AUTO)) {
              mContAutofocus = false;
              setFocusMode(FOCUS_MODE_NORMAL);
          } else {
              mContAutofocus = true;
          }
          break;

      default:
          throw new RuntimeException("Invalid application state");
      }
  }

Vuforia-II: Camera(II)


Comparing the Vuforia sample code confirmed a few things: a lot of the basic code is reusable — the init flow, the JNI setup, and so on are mostly identical. Apart from features needed for special purposes, the architecture is basically the same. For that reason I started extracting the reusable parts into a template, so that future projects can start from it to add, modify, or build more interesting demos. I also pushed my finished code to GitHub, taking the chance to get familiar with git version control. Honestly, a main activity over a thousand lines long is a bit hard for me to stomach. I roughly understand the flow now, so while it is still fresh, let me write it down.

cameraDemo.java
 @Override
  protected void onCreate(Bundle savedInstanceState) {
      DebugLog.LOGD("cameraDemo", "cameraDemo::onCreate");
      super.onCreate(savedInstanceState);
      // Load any sample specific textures:
      // mTextures = new Vector<Texture>();
      // loadTextures();
      // Query the QCAR initialization flags:
      mQCARFlags = getInitializationFlags();
      // Creates the GestureDetector listener for processing double tap
      // mGestureDetector = new GestureDetector(this, new GestureListener());
      // Update the application status to start initializing application:
      updateApplicationStatus(APPSTATUS_INIT_APP);
  }

Let's start with the program's entry point, onCreate(). Some parts can be dropped up front if all you want is to open the camera; the DebugLog and super.onCreate lines at the top need no further explanation. loadTextures() is about the computer-graphics material I mentioned in the previous post Vuforia-II: Camera (http://cychiang.github.com/blog/2013/02/04/vuforia-ii-camera/); see the Wiki entry for details — put simply, it pastes skin onto the model, like dressing it in clothes. getInitializationFlags() mainly detects which OpenGL ES version the device supports, which again calls into native code; the native side will be explained later.

cameraDemo.java
 /** Configure QCAR with the desired version of OpenGL ES. */
  private int getInitializationFlags() {
      int flags = 0;

      // Query the native code:
      if (getOpenGlEsVersionNative() == 1) {
          flags = QCAR.GL_11;
      } else {
          flags = QCAR.GL_20;
      }

      return flags;
  }

Finally onCreate() calls updateApplicationStatus(), which updates the current application state. The demo is written in detail and clearly documents what each state should do. Next let's look at the implementation of updateApplicationStatus() (the full listing appears at the end of this post). It is a bit long, but mostly comments — sometimes you have to admire how clearly some companies comment their code; it makes tracing painless. The states are defined as:

// Application status constants:
private static final int APPSTATUS_UNINITED = -1;
private static final int APPSTATUS_INIT_APP = 0;
private static final int APPSTATUS_INIT_QCAR = 1;
private static final int APPSTATUS_INIT_TRACKER = 2;
private static final int APPSTATUS_INIT_APP_AR = 3;
private static final int APPSTATUS_LOAD_TRACKER = 4;
private static final int APPSTATUS_INITED = 5;
private static final int APPSTATUS_CAMERA_STOPPED = 6;
private static final int APPSTATUS_CAMERA_RUNNING = 7;

Clarifying this first makes the program flow easier to follow. In onCreate() the argument passed in is APPSTATUS_INIT_APP, i.e. 0, which lands in the case APPSTATUS_INIT_APP: branch of updateApplicationStatus(int appStatus). That branch first runs initApplication() and, once it finishes, calls updateApplicationStatus(APPSTATUS_INIT_QCAR) — in other words, the function re-enters itself with APPSTATUS_INIT_QCAR as the new argument. Next, let's see what initApplication() does.

cameraDemo.java
 /** Initialize application GUI elements that are not related to AR. */
  private void initApplication() {
      // Set the screen orientation:
      // NOTE: Use SCREEN_ORIENTATION_LANDSCAPE or SCREEN_ORIENTATION_PORTRAIT
      // to lock the screen orientation for this activity.
      int screenOrientation = ActivityInfo.SCREEN_ORIENTATION_SENSOR;

      // This is necessary for enabling AutoRotation in the Augmented View
      if (screenOrientation == ActivityInfo.SCREEN_ORIENTATION_SENSOR) {
          // NOTE: We use reflection here to see if the current platform
          // supports the full sensor mode (available only on Gingerbread
          // and above.
          try {
              // SCREEN_ORIENTATION_FULL_SENSOR is required to allow all
              // 4 screen rotations if API level >= 9:
              Field fullSensorField = ActivityInfo.class
                      .getField("SCREEN_ORIENTATION_FULL_SENSOR");
              screenOrientation = fullSensorField.getInt(null);
          } catch (NoSuchFieldException e) {
              // App is running on API level < 9, do nothing.
          } catch (Exception e) {
              e.printStackTrace();
          }
      }

      // Apply screen orientation
      setRequestedOrientation(screenOrientation);

      updateActivityOrientation();

      // Query display dimensions:
      storeScreenDimensions();

      // As long as this window is visible to the user, keep the device's
      // screen turned on and bright:
      getWindow().setFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON,
              WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
  }

This function does not seem to perform any special initialization; all it visibly does is detect the current device orientation — whether you are holding the device in portrait or landscape — and there is a function that records the current screen state, whose details probably do not need close study. Presumably orientation matters because rendering has to account for it. Since this step just pins down the screen orientation, we can continue to the next one, updateApplicationStatus(APPSTATUS_INIT_QCAR);. The case APPSTATUS_INIT_QCAR branch mainly creates a task, InitQCARTask(), which we will walk through next. One thing to note is that it is implemented as a separate task: first mInitQCARTask = new InitQCARTask(); and then mInitQCARTask.execute().

cameraDemo.java
 /** An async task to initialize QCAR asynchronously. */
  private class InitQCARTask extends AsyncTask<Void, Integer, Boolean> {
      // Initialize with invalid value:
      private int mProgressValue = -1;

      protected Boolean doInBackground(Void... params) {
          // Prevent the onDestroy() method to overlap with initialization:
          synchronized (mShutdownLock) {
              QCAR.setInitParameters(cameraDemo.this, mQCARFlags);

              do {
                  // QCAR.init() blocks until an initialization step is
                  // complete, then it proceeds to the next step and reports
                  // progress in percents (0 ... 100%).
                  // If QCAR.init() returns -1, it indicates an error.
                  // Initialization is done when progress has reached 100%.
                  mProgressValue = QCAR.init();

                  // Publish the progress value:
                  publishProgress(mProgressValue);

                  // We check whether the task has been canceled in the
                  // meantime (by calling AsyncTask.cancel(true)).
                  // and bail out if it has, thus stopping this thread.
                  // This is necessary as the AsyncTask will run to completion
                  // regardless of the status of the component that
                  // started is.
              } while (!isCancelled() && mProgressValue >= 0
                      && mProgressValue < 100);

              return (mProgressValue > 0);
          }
      }

      protected void onProgressUpdate(Integer... values) {
          // Do something with the progress value "values[0]", e.g. update
          // splash screen, progress bar, etc.
      }

      protected void onPostExecute(Boolean result) {
          // Done initializing QCAR, proceed to next application
          // initialization status:
          if (result) {
              DebugLog.LOGD("cameraDemo",
                      "InitQCARTask::onPostExecute: QCAR "
                              + "initialization successful");

              updateApplicationStatus(APPSTATUS_INIT_TRACKER);
          } else {
              // Create dialog box for display error:
              AlertDialog dialogError = new AlertDialog.Builder(
                      cameraDemo.this).create();

              dialogError.setButton(DialogInterface.BUTTON_POSITIVE, "Close",
                      new DialogInterface.OnClickListener() {
                          public void onClick(DialogInterface dialog,
                                  int which) {
                              // Exiting application:
                              System.exit(1);
                          }
                      });

              String logMessage;

              // NOTE: Check if initialization failed because the device is
              // not supported. At this point the user should be informed
              // with a message.
              if (mProgressValue == QCAR.INIT_DEVICE_NOT_SUPPORTED) {
                  logMessage = "Failed to initialize QCAR because this "
                          + "device is not supported.";
              } else {
                  logMessage = "Failed to initialize QCAR.";
              }

              // Log error:
              DebugLog.LOGE("cameraDemo", "InitQCARTask::onPostExecute: "
                      + logMessage + " Exiting.");

              // Show dialog box with error message:
              dialogError.setMessage(logMessage);
              dialogError.show();
          }
      }
  }
  

Inside InitQCARTask(), in the main protected Boolean doInBackground(Void... params), only two things really matter: QCAR.setInitParameters(cameraDemo.this, mQCARFlags); and QCAR.init(). But QCAR.setInitParameters() is a little odd: the official API lists only one parameter — the docs say int QCAR_API QCAR::setInitParameters ( int flags ) — so I wonder why the Java version takes an extra Activity argument; since it is required, in it goes. Then comes QCAR.init(), which the official API merely describes as "Initializes QCAR." without really explaining how. There is, however, an iOS note that reads in full: "iOS: Called to initialize QCAR. Initialization is progressive, so this function should be called repeatedly until it returns 100 or a negative value. Returns an integer representing the percentage complete (negative on error)." So this doInBackground(Void... params) keeps looping until mProgressValue reaches 100 — essentially a progress bar, which is neat. This task does nothing but initialize QCAR; when it finishes, onPostExecute() runs and calls updateApplicationStatus(APPSTATUS_INIT_TRACKER);, the start of yet another function. I suspect calling it directly without a task would also work, just less safely!? Next let's look at case APPSTATUS_INIT_TRACKER:, whose main function is initTracker() — native code, so we have no choice but to jump over to the native side to trace it.

cameraDemo.cpp
JNIEXPORT int JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_initTracker(JNIEnv *, jobject)
{
    LOG("Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_initTracker");

    // Initialize the image tracker:
    QCAR::TrackerManager& trackerManager = QCAR::TrackerManager::getInstance();
    QCAR::Tracker* tracker = trackerManager.initTracker(QCAR::Tracker::IMAGE_TRACKER);
    if (tracker == NULL)
    {
        LOG("Failed to initialize ImageTracker.");
        return 0;
    }

    LOG("Successfully initialized ImageTracker.");
    return 1;
}

native code當中,主要有兩個用來InitializefunctionQCAR::TrackerManagerQCAR::Tracker* tracker在這邊參考一下官方的API這個,雖然官方API有些說明很兩光或是不清楚,但是大致上還是可以理解。首先在QCAR::TrackerManager當中,主要就是需要取得一個TrackerManagerInstance()接著針對這個Instance進行操作。接著設定Tracker用來追蹤所偵測到的目標而在所追蹤的目標也需要初始化它的類型,這就是trackerManager.initTracker()所做的事情。在這邊當中只有兩種IMAGE_TRACKER-Tracks ImageTargets and MultiTargets.以及MARKER_TRACKER-Tracks Markers.這兩種,那關於Image TargetsMultiTargets以及Tracks MarkersVuforia-I有說明。OK,這邊想要補充一下什麼是Instance這是軟工的一個名詞,定義說明如下。

Instance: An instance is an object created from a class. The class describes the (behavior and information) structure of the instance, while the current state of the instance is defined by the operations performed on the instance.

Excerpted from: 搞笑軟工

Once all this is done, the flow moves on to updateApplicationStatus(APPSTATUS_INIT_APP_AR);. That is it for today; I will continue the explanation and the trace another day.

cameraDemo.java
 /**
  * NOTE: this method is synchronized because of a potential concurrent
  * access by ImageTargets::onResume() and InitQCARTask::onPostExecute().
  */
  private synchronized void updateApplicationStatus(int appStatus) {
      // Exit if there is no change in status:
      if (mAppStatus == appStatus)
          return;

      // Store new status value:
      mAppStatus = appStatus;

      // Execute application state-specific actions:
      switch (mAppStatus) {
      case APPSTATUS_INIT_APP:
          // Initialize application elements that do not rely on QCAR
          // initialization:
          initApplication();

          // Proceed to next application initialization status:
          updateApplicationStatus(APPSTATUS_INIT_QCAR);
          break;

      case APPSTATUS_INIT_QCAR:
          // Initialize QCAR SDK asynchronously to avoid blocking the
          // main (UI) thread.
          //
          // NOTE: This task instance must be created and invoked on the
          // UI thread and it can be executed only once!
          try {
              mInitQCARTask = new InitQCARTask();
              mInitQCARTask.execute();
          } catch (Exception e) {
              DebugLog.LOGE("cameraDemo", "Initializing QCAR SDK failed");
          }
          break;

      case APPSTATUS_INIT_TRACKER:
          // Initialize the ImageTracker:
          if (initTracker() > 0) {
              // Proceed to next application initialization status:
              updateApplicationStatus(APPSTATUS_INIT_APP_AR);
          }
          break;

      case APPSTATUS_INIT_APP_AR:
          // Initialize Augmented Reality-specific application elements
          // that may rely on the fact that the QCAR SDK has been
          // already initialized:
          initApplicationAR();

          // Proceed to next application initialization status:
          updateApplicationStatus(APPSTATUS_LOAD_TRACKER);
          break;

      case APPSTATUS_LOAD_TRACKER:
          // Load the tracking data set:
          //
          // NOTE: This task instance must be created and invoked on the
          // UI thread and it can be executed only once!
          try {
              mLoadTrackerTask = new LoadTrackerTask();
              mLoadTrackerTask.execute();
          } catch (Exception e) {
              DebugLog.LOGE("cameraDemo", "Loading tracking data set failed");
          }
          break;

      case APPSTATUS_INITED:
          // Hint to the virtual machine that it would be a good time to
          // run the garbage collector:
          //
          // NOTE: This is only a hint. There is no guarantee that the
          // garbage collector will actually be run.
          System.gc();

          // Native post initialization:
          onQCARInitializedNative();

          // Activate the renderer:
          mRenderer.mIsActive = true;

          // Now add the GL surface view. It is important
          // that the OpenGL ES surface view gets added
          // BEFORE the camera is started and video
          // background is configured.
          addContentView(mGlView, new LayoutParams(LayoutParams.MATCH_PARENT,
                  LayoutParams.MATCH_PARENT));

          // Sets the UILayout to be drawn in front of the camera
          // mUILayout.bringToFront();

          // Start the camera:
          updateApplicationStatus(APPSTATUS_CAMERA_RUNNING);

          break;

      case APPSTATUS_CAMERA_STOPPED:
          // Call the native function to stop the camera:
          stopCamera();
          break;

      case APPSTATUS_CAMERA_RUNNING:
          // Call the native function to start the camera:
          startCamera();

          // Hides the Loading Dialog
          // loadingDialogHandler.sendEmptyMessage(HIDE_LOADING_DIALOG);

          // Sets the layout background to transparent
          // mUILayout.setBackgroundColor(Color.TRANSPARENT);

          // Set continuous auto-focus if supported by the device,
          // otherwise default back to regular auto-focus mode.
          // This will be activated by a tap to the screen in this
          // application.
          if (!setFocusMode(FOCUS_MODE_CONTINUOUS_AUTO)) {
              mContAutofocus = false;
              setFocusMode(FOCUS_MODE_NORMAL);
          } else {
              mContAutofocus = true;
          }
          break;

      default:
          throw new RuntimeException("Invalid application state");
      }
  }

Vuforia-II: Camera


To fully grasp how Vuforia is used on Android, my personal style is to take the sample code apart step by step and understand it thoroughly, not just well enough to modify it. The Vuforia sample programs break down into several parts:

  • Copyright notice
  • Loading screen
  • Opening the camera
  • Running the AR algorithm
  • If a target is found, render the object; otherwise skip

Those are roughly the things every sample program does, and the most basic of them is probably opening the camera. After creating the project, first add the camera permission to the Android app.

AndroidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.vuforiacamera"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk
        android:minSdkVersion="15"
        android:targetSdkVersion="15" />
    <!-- Add permission of camera -->
    <uses-feature android:name="android.hardware.camera" />
    <uses-permission android:name="android.permission.CAMERA" />


    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name="com.example.vuforiacamera.MainActivity"
            android:configChanges="orientation|keyboardHidden"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

    </application>

</manifest>

According to my earlier analysis, the sample program goes through several steps: Init_QCAR -> Init_Tracker -> InitApp_AR -> InitLoader_Tracker. I may fill in the details later if I feel like it; the names are my own shorthand and not necessarily exact XD, but they can all be found in the Vuforia sample code. Init_QCAR and Init_Tracker load the QCAR library, i.e. Vuforia; Init_Tracker also loads the database — for what that database is, see the Vuforia-I post. The sample uses OpenGL ES as one of its main drawing tools, and a large chunk of code detects whether the platform supports OpenGL ES 1.1 or OpenGL ES 2.0. Understanding how the camera image is read and drawn onto the GLSurfaceView is the part that must be made clear, and it involves OpenGL ES idioms that, honestly, I do not fully understand either XD. This feels like it will take a long time to write, so I will do it in parts. Let's start with the sample's QCARSampleGLView.java; many of its settings are baffling at first, so here I will slowly explain and work out what this code has to accomplish. Understanding it should also clarify how an OpenGL ES program is written. Pasted in full, copyright notice included!

QCARSampleGLView.java
/*==============================================================================
            Copyright (c) 2010-2012 QUALCOMM Austria Research Center GmbH.
            All Rights Reserved.
            Qualcomm Confidential and Proprietary
==============================================================================*/

package com.qualcomm.QCARSamples.ImageTargets;

import com.qualcomm.QCAR.QCAR;

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;

import android.content.Context;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;


/** QCARSampleGLView is a support class for the QCAR samples applications.
 *
 *  Responsible for setting up and configuring the OpenGL surface view.
 *
 * */
public class QCARSampleGLView extends GLSurfaceView
{
    private static boolean mUseOpenGLES2 = true;

    /** Constructor. */
    public QCARSampleGLView(Context context)
    {
        super(context);
    }


    /** Initialization. */
    public void init(int flags, boolean translucent, int depth, int stencil)
    {
        // By default GLSurfaceView tries to find a surface that is as close
        // as possible to a 16-bit RGB frame buffer with a 16-bit depth buffer.
        // This function can override the default values and set custom values.

        // Extract OpenGL ES version from flags
        mUseOpenGLES2 = (flags & QCAR.GL_20) != 0;

        // By default, GLSurfaceView() creates a RGB_565 opaque surface.
        // If we want a translucent one, we should change the surface's
        // format here, using PixelFormat.TRANSLUCENT for GL Surfaces
        // is interpreted as any 32-bit surface with alpha by SurfaceFlinger.

        DebugLog.LOGI("Using OpenGL ES " + (mUseOpenGLES2 ? "2.0" : "1.x"));
        DebugLog.LOGI("Using " + (translucent ? "translucent" : "opaque") +
            " GLView, depth buffer size: " + depth + ", stencil size: " +
            stencil);

        // If required set translucent format to allow camera image to
        // show through in the background
        if (translucent)
        {
            this.getHolder().setFormat(PixelFormat.TRANSLUCENT);
        }

        // Setup the context factory for 1.x / 2.0 rendering
        setEGLContextFactory(new ContextFactory());

        // We need to choose an EGLConfig that matches the format of
        // our surface exactly. This is going to be done in our
        // custom config chooser. See ConfigChooser class definition
        // below.
        setEGLConfigChooser( translucent ?
                             new ConfigChooser(8, 8, 8, 8, depth, stencil) :
                             new ConfigChooser(5, 6, 5, 0, depth, stencil) );
    }


    /** Creates OpenGL contexts. */
    private static class ContextFactory implements
        GLSurfaceView.EGLContextFactory
    {
        private static int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
        public EGLContext createContext(EGL10 egl, EGLDisplay display,
            EGLConfig eglConfig)
        {
            EGLContext context;
            if (mUseOpenGLES2)
            {
                DebugLog.LOGI("Creating OpenGL ES 2.0 context");
                checkEglError("Before eglCreateContext", egl);
                int[] attrib_list_gl20 = {EGL_CONTEXT_CLIENT_VERSION, 2,
                    EGL10.EGL_NONE};
                context = egl.eglCreateContext(display, eglConfig,
                    EGL10.EGL_NO_CONTEXT, attrib_list_gl20);
            }
            else
            {
                DebugLog.LOGI("Creating OpenGL ES 1.x context");
                checkEglError("Before eglCreateContext", egl);
                int[] attrib_list_gl1x = {EGL_CONTEXT_CLIENT_VERSION, 1,
                    EGL10.EGL_NONE};
                context = egl.eglCreateContext(display, eglConfig,
                    EGL10.EGL_NO_CONTEXT, attrib_list_gl1x);
            }

            checkEglError("After eglCreateContext", egl);
            return context;
        }

        public void destroyContext(EGL10 egl, EGLDisplay display,
            EGLContext context)
        {
            egl.eglDestroyContext(display, context);
        }
    }


    /** Checks the OpenGL error. */
    private static void checkEglError(String prompt, EGL10 egl)
    {
        int error;
        while ((error = egl.eglGetError()) != EGL10.EGL_SUCCESS)
        {
            DebugLog.LOGE(String.format("%s: EGL error: 0x%x", prompt, error));
        }
    }


    /** The config chooser. */
    private static class ConfigChooser implements GLSurfaceView.EGLConfigChooser
    {
        public ConfigChooser(int r, int g, int b, int a, int depth, int stencil)
        {
            mRedSize = r;
            mGreenSize = g;
            mBlueSize = b;
            mAlphaSize = a;
            mDepthSize = depth;
            mStencilSize = stencil;
        }


        private EGLConfig getMatchingConfig(EGL10 egl, EGLDisplay display,
            int[] configAttribs)
        {
            // Get the number of minimally matching EGL configurations
            int[] num_config = new int[1];
            egl.eglChooseConfig(display, configAttribs, null, 0, num_config);

            int numConfigs = num_config[0];
            if (numConfigs <= 0)
                throw new IllegalArgumentException("No matching EGL configs");

            // Allocate then read the array of minimally matching EGL configs
            EGLConfig[] configs = new EGLConfig[numConfigs];
            egl.eglChooseConfig(display, configAttribs, configs, numConfigs,
                num_config);

            // Now return the "best" one
            return chooseConfig(egl, display, configs);
        }


        public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display)
        {
            if (mUseOpenGLES2)
            {
                // This EGL config specification is used to specify 2.0
                // rendering. We use a minimum size of 4 bits for
                // red/green/blue, but will perform actual matching in
                // chooseConfig() below.
                final int EGL_OPENGL_ES2_BIT = 0x0004;
                final int[] s_configAttribs_gl20 =
                {
                    EGL10.EGL_RED_SIZE, 4,
                    EGL10.EGL_GREEN_SIZE, 4,
                    EGL10.EGL_BLUE_SIZE, 4,
                    EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                    EGL10.EGL_NONE
                };

                return getMatchingConfig(egl, display, s_configAttribs_gl20);
            }
            else
            {
                final int EGL_OPENGL_ES1X_BIT = 0x0001;
                final int[] s_configAttribs_gl1x =
                {
                    EGL10.EGL_RED_SIZE, 5,
                    EGL10.EGL_GREEN_SIZE, 6,
                    EGL10.EGL_BLUE_SIZE, 5,
                    EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES1X_BIT,
                    EGL10.EGL_NONE
                };

                return getMatchingConfig(egl, display, s_configAttribs_gl1x);
            }
        }


        public EGLConfig chooseConfig(
            EGL10 egl, EGLDisplay display, EGLConfig[] configs)
        {
            for(EGLConfig config : configs)
            {
                int d = findConfigAttrib(egl, display, config,
                        EGL10.EGL_DEPTH_SIZE, 0);
                int s = findConfigAttrib(egl, display, config,
                        EGL10.EGL_STENCIL_SIZE, 0);

                // We need at least mDepthSize and mStencilSize bits
                if (d < mDepthSize || s < mStencilSize)
                    continue;

                // We want an *exact* match for red/green/blue/alpha
                int r = findConfigAttrib(egl, display, config,
                        EGL10.EGL_RED_SIZE, 0);
                int g = findConfigAttrib(egl, display, config,
                            EGL10.EGL_GREEN_SIZE, 0);
                int b = findConfigAttrib(egl, display, config,
                            EGL10.EGL_BLUE_SIZE, 0);
                int a = findConfigAttrib(egl, display, config,
                        EGL10.EGL_ALPHA_SIZE, 0);

                if (r == mRedSize &&
                    g == mGreenSize &&
                    b == mBlueSize &&
                    a == mAlphaSize)
                    return config;
            }

            return null;
        }


        private int findConfigAttrib(
            EGL10 egl, EGLDisplay display, EGLConfig config, int attribute,
            int defaultValue)
        {

            if (egl.eglGetConfigAttrib(display, config, attribute, mValue))
                return mValue[0];

            return defaultValue;
        }


        // Subclasses can adjust these values:
        protected int mRedSize;
        protected int mGreenSize;
        protected int mBlueSize;
        protected int mAlphaSize;
        protected int mDepthSize;
        protected int mStencilSize;
        private int[] mValue = new int[1];
    }
}

This code is mainly responsible for initializing the GLView, perhaps because of OpenGL ES version differences? Initialization involves three things:

  • Detect and confirm the OpenGL ES version
  • EGLContextFactory
  • EGLConfigChooser

From what I could find, OpenGL needs an EGLContextFactory and an EGLConfigChooser set up before rendering. What is rendering? See the Wikipedia article on rendering — essentially geometry, viewpoint, texture, and lighting information. Honestly, I did not understand it at all; it looks like studying computer graphics would make this much clearer. OpenGL code really does read like scripture. Later I realized these are just settings OpenGL ES may need, presumably because 3D texturing requires the geometry, viewpoint, and related data to make the rendered objects more convincing!? The main program's code, shown below, pretty much says the same thing.

ImageTargets.java
    private void initApplicationAR()
    {
        // Do application initialization in native code (e.g. registering
        // callbacks, etc.):
        initApplicationNative(mScreenWidth, mScreenHeight);

        // Create OpenGL ES view:
        int depthSize = 16;
        int stencilSize = 0;
        boolean translucent = QCAR.requiresAlpha();

        mGlView = new QCARSampleGLView(this);
        mGlView.init(mQCARFlags, translucent, depthSize, stencilSize);

        mRenderer = new ImageTargetsRenderer();
        mRenderer.mActivity = this;
        mGlView.setRenderer(mRenderer);

        LayoutInflater inflater = LayoutInflater.from(this);
        mUILayout = (RelativeLayout) inflater.inflate(R.layout.camera_overlay,
                null, false);

        mUILayout.setVisibility(View.VISIBLE);
        mUILayout.setBackgroundColor(Color.BLACK);

        // Gets a reference to the loading dialog
        mLoadingDialogContainer = mUILayout.findViewById(R.id.loading_indicator);

        // Shows the loading indicator at start
        loadingDialogHandler.sendEmptyMessage(SHOW_LOADING_DIALOG);

        // Adds the inflated layout to the view
        addContentView(mUILayout, new LayoutParams(LayoutParams.MATCH_PARENT,
                LayoutParams.MATCH_PARENT));
    }

There is another file dedicated to rendering. In the main initApplicationAR above, the code first creates a GLView — the QCARSampleGLView.java pasted earlier — and then initializes it with the needed parameters. It then creates the renderer, ImageTargetsRenderer.java. Writing this, I am tempted to just copy these files wholesale; they feel like they should be generic. In any case, once the GLView is created, a Renderer must be attached, and in ImageTargetsRenderer.java the drawing function renderFrame() is native code, to be covered later. Finally the GLView is added to the content view: addContentView(mGlView, new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT));

ImageTargetsRenderer.java
/*==============================================================================
            Copyright (c) 2010-2012 QUALCOMM Austria Research Center GmbH.
            All Rights Reserved.
            Qualcomm Confidential and Proprietary

@file
    ImageTargetsRenderer.java

@brief
    Sample for ImageTargets

==============================================================================*/


package com.qualcomm.QCARSamples.ImageTargets;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLSurfaceView;

import com.qualcomm.QCAR.QCAR;


/** The renderer class for the ImageTargets sample. */
public class ImageTargetsRenderer implements GLSurfaceView.Renderer
{
    public boolean mIsActive = false;

    /** Reference to main activity **/
    public ImageTargets mActivity;

    /** Native function for initializing the renderer. */
    public native void initRendering();

    /** Native function to update the renderer. */
    public native void updateRendering(int width, int height);


    /** Called when the surface is created or recreated. */
    public void onSurfaceCreated(GL10 gl, EGLConfig config)
    {
        DebugLog.LOGD("GLRenderer::onSurfaceCreated");

        // Call native function to initialize rendering:
        initRendering();

        // Call QCAR function to (re)initialize rendering after first use
        // or after OpenGL ES context was lost (e.g. after onPause/onResume):
        QCAR.onSurfaceCreated();
    }


    /** Called when the surface changed size. */
    public void onSurfaceChanged(GL10 gl, int width, int height)
    {
        DebugLog.LOGD("GLRenderer::onSurfaceChanged");

        // Call native function to update rendering when render surface
        // parameters have changed:
        updateRendering(width, height);

        // Call QCAR function to handle render surface size changes:
        QCAR.onSurfaceChanged(width, height);
    }


    /** The native render function. */
    public native void renderFrame();


    /** Called to draw the current frame. */
    public void onDrawFrame(GL10 gl)
    {
        if (!mIsActive)
            return;

        // Update render view (projection matrix and viewport) if needed:
        mActivity.updateRenderView();

        // Call our native function to render content
        renderFrame();
    }
}

In the end I decided to copy all of this existing code as-is; it should be reusable. That leaves just the JNI part, which keeps the problem simple. Once I understand OpenGL better, I can come back and flesh this out!

Python: GUI Programming-II

| Comments

Recently I collected some Python programs that work with Google Maps, both to sharpen my own Python and to see how other people write. Next up is PIL, the Python Imaging Library. Installing it is easy: brew install pil and you're done. How to use it? I'll study that later; I only want to note that it exists, although it hasn't been updated in a while. Its home page says a new release is coming soon, though who knows how soon XD. The theme of this Python exercise is getting comfortable with object-oriented programming in Python. The book I'm following targets Python 3, but that shouldn't matter much. Building and packaging modules also looks important. This continues the GUI work from last time, now adding OO code. The main program looks like this; a console interface is enough. Typing in the book's exercises isn't anything special, but to use another module you've written, it seems you only need to import it.

ConsoleMain.py
'''
Created on 2013/2/3

@author: CYChiang
'''
import Shape
if __name__ == '__main__':
    a = Shape.Point()
    b = Shape.Point(3, 4)
    print("a = Shape.Point")
    print("repr(a) = ", repr(a))
    print("b = Shape.Point(3, 4)")
    print("repr(b) = ", repr(b))
    print("str(b) = ", str(b))
    print("b.distance_from_origin = ", b.distance_from_origin())
    b.x = -19
    print("b=-19, str(b) = ", str(b))
    if a == b:
        print("Point a equal b")
    if a != b:
        print("Point a not equal to b")

The class code contains a lot of OO machinery; according to the book, these special methods come with the base object class. I tried deleting __str__, and the program still produced output just like the version with __str__ defined, presumably because str() falls back to __repr__ when a class defines no __str__ of its own. So these built-in methods are what let class instances participate in operations like comparison and printing; a quick search shows object provides many more of them beyond the ones used here.
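A minimal standalone sketch of that fallback (the class name here is made up for illustration, not part of Shape.py):

class OnlyRepr(object):
    # Defines __repr__ only, no __str__.
    def __repr__(self):
        return "OnlyRepr()"

p = OnlyRepr()
print(repr(p))   # OnlyRepr()
print(str(p))    # OnlyRepr() -- str() falls back to __repr__ when __str__ is absent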

Shape.py
'''
Created on 2013/2/3

@author: CYChiang
'''
import math

class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y
    def distance_from_origin(self):
        return math.hypot(self.x, self.y)
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y
    def __repr__(self):
        return "Point({0.x!r}, {0.y!r})".format(self)
    def __str__(self):
        return "({0.x!r}, {0.y!r})".format(self)

Nothing too surprising so far; the above is the basic class implementation, so next up is inheritance. While practicing it, the book's code simply refused to run... Googling suggests it's an old-style class problem. I don't want to deal with that right now; for the moment I found another way to call the parent's constructor.

Shape.py
class Circle(Point):
    radius = 0
    def __init__(self, radius=0, x=0, y=0):
        #super().__init__(x, y)
        #super(Circle, self).__init__(x, y)
        Point.__init__(self, x, y)
        self.radius = radius
    def edge_distance_from_origin(self):
        return abs(self.distance_from_origin() - self.radius)
    def area(self):
        return math.pi * (self.radius ** 2)
    def circumference(self):
        return 2 * math.pi * self.radius
    def __eq__(self, other):
        # Zero-arg super() fails here too under Python 2 (Point is old-style), so call Point directly:
        return self.radius == other.radius and Point.__eq__(self, other)
    def __repr__(self):
        return "Circle({0.radius!r}, {0.x!r}, {0.y!r})".format(self)
    def __str__(self):
        return repr(self)

The problem is with super().__init__(x, y) and super(Circle, self).__init__(x, y). The __init__ method runs when an instance is created; think of it as the constructor, and the super call is supposed to run Point's constructor to set x and y. But both super().__init__(x, y) and super(Circle, self).__init__(x, y) throw errors here. What does work is Point.__init__(self, x, y), which calls Point's constructor directly. The likely reason: zero-argument super() is Python 3 syntax, and under Python 2 super(Circle, self) requires the base to be a new-style class, i.e. Point must inherit from object, which the version above does not. Anyway, the direct call works; I'll think about a cleaner fix later. The next version of the code adds checks that the numbers are positive (nonzero). Memo: assert condition, "message shown on failure".
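A small sketch of the three call styles (Base and Derived are placeholder names, not from the book), plus the assert memo in action:

class Base(object):                # new-style class: inherits from object
    def __init__(self, x):
        self.x = x

class Derived(Base):
    def __init__(self, x):
        Base.__init__(self, x)     # direct call: works in Python 2 and 3, even for old-style classes
        # super(Derived, self).__init__(x)   # Python 3, and Python 2 only when Base is new-style
        # super().__init__(x)                # Python 3 only -- this one crashes on a Python 2 interpreter

value = 5
assert value > 0, "must be positive"   # raises AssertionError with this message when the condition is false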

Shape.py
'''
Created on 2013/2/3

@author: CYChiang
'''
import math

class Point(object):
    def __init__(self, x=1, y=1):   # defaults changed from 0, which would fail the x > 0 / y > 0 asserts below
        self.x = x
        self.y = y
    def distance_from_origin(self):
        return math.hypot(self.x, self.y)
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y
    def __repr__(self):
        return "Point({0.x!r}, {0.y!r})".format(self)
    def __str__(self):
        return "({0.x!r}, {0.y!r})".format(self)
    @property
    def x(self):
        return self.__x
    @property
    def y(self):
        return self.__y
    @x.setter
    def x(self, x):
        assert x > 0, "must be nonzero and non-negative"
        self.__x = x
    @y.setter
    def y(self, y):
        assert y > 0, "must be nonzero and non-negative"
        self.__y=y


class Circle(Point):
    def __init__(self, radius=1, x=1, y=1):   # defaults changed from 0 to satisfy the asserts
        #super().__init__(x, y)
        #super(Circle, self).__init__(x, y)
        Point.__init__(self, x, y)
        self.radius = radius
    @property
    def edge_distance_from_origin(self):
        return abs(self.distance_from_origin() - self.radius)
    @property
    def area(self):
        return math.pi * (self.radius ** 2)
    def circumference(self):
        return 2 * math.pi * self.radius
    def __eq__(self, other):
        # Python 2-compatible form of super().__eq__(other), now that Point is new-style:
        return self.radius == other.radius and super(Circle, self).__eq__(other)
    def __repr__(self):
        return "Circle({0.radius!r}, {0.x!r}, {0.y!r})".format(self)
    def __str__(self):
        return repr(self)
    @property
    def radius(self):
        return self.__radius
    @radius.setter
    def radius(self, radius):
        assert radius > 0, "must be nonzero and non-negative"
        self.__radius = radius

It occurs to me that adding all these checks makes the class quite a bit longer... though it's still tolerable. One thing to watch: constructor defaults of 0 would fail the x > 0 asserts, which is why the defaults above were changed to 1. I'll think later about how to slim this down or whether there are better validation approaches. Next I want to practice file handling and keep studying other people's code, to see whether something more interesting comes out of it.
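One possible way to trim the repeated getter/setter/assert pattern is a small property factory. This is only a sketch of that idea; positive_property is a made-up helper, not a library function:

def positive_property(name):
    # Hypothetical helper: builds a property whose setter asserts value > 0.
    attr = "_" + name
    def getter(self):
        return getattr(self, attr)
    def setter(self, value):
        assert value > 0, name + " must be positive"
        setattr(self, attr, value)
    return property(getter, setter)

class Point(object):
    x = positive_property("x")
    y = positive_property("y")
    def __init__(self, x=1, y=1):
        self.x = x     # goes through the asserting setter
        self.y = y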

Python: GUI Programming-I

| Comments

I always have plenty of ideas and never enough time. Then I thought: time is like OO. So I've decided to stick with it and try to write something every day, both to keep this blog going and to keep writing code, so practice doesn't fall behind.

PyQtGuiMain.py
###
#Created on 2013/2/2

#@author: CYChiang
###
from PyQt4 import QtGui

class PyGui(QtGui.QWidget):
    def __init__(self, parent=None):
        super(PyGui, self).__init__(parent)
        # set component
        label = QtGui.QLabel("The Label")
        button = QtGui.QPushButton("Click Me")
        lineEdit = QtGui.QLineEdit()
        # configure the layout
        mainLayout = QtGui.QGridLayout()
        mainLayout.addWidget(label, 0, 0)
        mainLayout.addWidget(lineEdit, 0, 1)
        mainLayout.addWidget(button, 1, 0)
        # add event listener and handle
        button.clicked.connect(self.close)
        # set layout
        self.setLayout(mainLayout)

import sys
if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    MainGui = PyGui()
    # setGeometry(x_pos, y_pos, width, height)
    MainGui.setGeometry(400, 200, 800, 600)
    # resize(width, height)
    # MainGui.resize(400, 300)
    # Set the title of program
    MainGui.setWindowTitle('The PyQt GUI')
    # Set Icon
    # MainGui.setWindowIcon(QtGui.QIcon('lucia_2d.png'))
    MainGui.setToolTip('I am message.')
    # Set style
    MainGui.setStyleSheet('background: white')
    # Display 
    MainGui.show()
    # run and exit
    sys.exit(app.exec_())   # propagate the event-loop exit code; app.exit() after exec_() returns is a no-op

Running the program looks roughly like this. Overall, writing a Qt GUI in Python feels much like writing Qt in C/C++; most methods are used the same way. Here I mainly practiced what QWidget offers and how to use Qt's connect, which differs from the C/C++ usage I knew; PyQt4 supports a new-style signal/slot syntax, which is probably why it looks different.
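For reference, a sketch contrasting the two connection styles in PyQt4 (the old style mirrors the C++ SIGNAL()/SLOT() macros; the new style, added in PyQt 4.5, is what PyGui uses above):

import sys
from PyQt4 import QtCore, QtGui

app = QtGui.QApplication(sys.argv)
button = QtGui.QPushButton("Click Me")

# Old style, closest to the C/C++ habit:
QtCore.QObject.connect(button, QtCore.SIGNAL("clicked()"), app.quit)

# New style, as used in PyGui above (both connections do the same thing here):
button.clicked.connect(app.quit)

button.show()
sys.exit(app.exec_())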

A few notes follow, again focused on QWidget. I've found that unless a window needs something unusual, the methods the framework already provides are plenty. I'll gradually fill in more; even I don't know yet what will end up here.

MainGui.setGeometry(x, y, w, h)                   # position the window at (x, y) on screen and set its size
MainGui.setWindowTitle("String")                  # set the window title
MainGui.setToolTip("String")                      # text shown when the mouse hovers over blank areas of the window
MainGui.setWindowIcon(QtGui.QIcon("path"))        # set the window icon image
MainGui.setStyleSheet('background: white')        # set the background color; apparently images can be used too

As for what methods the other objects provide, I'd guess they're mostly similar to QWidget's. For arranging widgets, layouts help, particularly for pinning things to specific spots in the window, as the sketch below shows. I hope I can keep this up daily! Since my Python is still not fluent, writing anything extra feels clunky, so the plan is to get familiar first, then keep practicing and improving.
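As a sketch of that idea (the widget and function names are illustrative): nesting a QHBoxLayout inside a QVBoxLayout with stretches pins a row of buttons to the bottom-right, which is awkward to do with a grid alone.

from PyQt4 import QtGui

def build_bottom_right_buttons(parent):
    ok = QtGui.QPushButton("OK")
    cancel = QtGui.QPushButton("Cancel")

    row = QtGui.QHBoxLayout()
    row.addStretch(1)           # eats the free space on the left
    row.addWidget(ok)
    row.addWidget(cancel)

    column = QtGui.QVBoxLayout()
    column.addStretch(1)        # eats the free space above
    column.addLayout(row)       # button row ends up at the bottom
    parent.setLayout(column)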

Upgrade Python on Mac OS X

| Comments

Mac OS X ships with Python 2.7.2 preinstalled, but upgrading it is awkward. It is buried in /usr/bin and apparently symlinked off to other odd folders, so I spent a long time looking for a way to install a newer Python 2.7.3 (or, later, Python 3.x) without touching the preinstalled 2.7.2 under the troublesome /usr/bin. In the end the convenient fix is simply to edit /etc/paths and adjust the lookup order. After the change it looks like this:

/usr/local/bin
/usr/bin
/bin
/usr/sbin
/sbin

Because I installed the new python via brew, its executable lands in /usr/local/bin, while a same-named python also exists in /usr/bin; the shell resolves a command by scanning these directories in order, so whichever path is listed first wins. This is a handy place to see how Mac OS X resolves search paths. For reference, the original, unmodified order in /etc/paths was:

/usr/bin
/bin
/usr/sbin
/sbin
/usr/local/bin

So just move /usr/local/bin, where brew installs things, to the front, and you're done!
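A quick way to confirm the switch took effect is to ask the interpreter itself where it lives (just a sanity-check sketch):

import sys
print(sys.executable)   # should now point somewhere under /usr/local for the brew-installed python
print(sys.version)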

Octopress memo-I

| Comments

I've tried blogging a few times before, but I'm not fluent in HTML, and CSS isn't my strong suit either, so I relied on tools. Previously I used Windows Live Writer; besides being painful to use, the layout and fonts often came out different from what I wrote, and images and other assets were hard to manage, so eventually I just stopped writing. Not that the old blog contained anything important anyway.

Recently I ran across posts introducing blogging with Octopress + Github, got interested, tried it, and liked it, so the blog-writing journey restarts here. This post mainly records the commands I use most often. See also the official Octopress site; for annotating code blocks, see here.

rake new_post["post_title"]       # create a new post; generates a .markdown file automatically
rake generate                     # after writing, run this to generate the site pages
rake deploy                       # publish the blog to github

What needs to be installed:

  • Ruby
  • Python
  • git

Once those are installed, Octopress seems to fetch its own dependencies automatically; plenty of blogs document the install steps if you search around. One thing to note is the Ruby version: Octopress supports 1.9.2, and apparently now 1.9.3 as well.

# Install rvm (Ruby Version Manager); reference:
https://rvm.io/
# Install ruby 1.9.3
rvm install 1.9.3                             # install ruby 1.9.3 via rvm
rvm 1.9.3 --default                           # make 1.9.3 the default version
ruby --version                                # verify the version
# Note: I vaguely recall that installing ruby 1.9.3 failed once on my Mac, apparently a pkg-config path issue
# Fetch octopress from github:
git clone git://github.com/imathis/octopress.git octopress
cd octopress
gem install bundler
bundle install
rake install
# Updating Octopress:
git pull octopress master                     # get the latest octopress
bundle install                                # update gems
rake update_source                            # update the source
rake update_style                             # update the templates

The above is based on this reference. By the way, since I wasn't familiar with the Mac early on, I installed packages haphazardly with no management at all; later I found that package managers such as Homebrew handle downloading and managing everything in one place, and it works well. Writing this makes me want to wipe and reinstall my Macbook Air. Finally, a recommendation: Mou, a Markdown editor for the Mac.

Using JNI in Eclipse

| Comments

JNI = Java Native Interface. I need it partly because of Android performance concerns, and partly because most of the Vuforia sample code uses JNI, probably also for performance. As usual, this memo draws on other online resources, mainly Learning Android, Chapter 15: Native Development Kit, but some commands in its examples, such as javah -jni [command], didn't work for me, so I'm writing down what did. The program isn't large; I'll list only the important parts. The reference uses Fibonacci numbers as the example, and the result, as shown in the figure, compares execution speed under JNI versus Java; the cost of the recursive computation is clearly significant.

Now for the code itself:

CalcFib.java
package com.esw.ndk.calc.fib;

public class CalcFib {
  //Java Fib implementation
  public static long JavaFibRecursive(long Num) {
      if (Num <= 0)
          return 0;
      if (Num == 1)
          return 1;
      return JavaFibRecursive(Num-1) + JavaFibRecursive(Num-2);
  }
  public static long JavaFibInterative(long Num) {
      long previous = -1;
      long result = 1;
      for (long i = 0; i <= Num; i++) {
          long sum = result + previous;
          previous = result;
          result = sum;
      }
      return result;
  }
  //Native Fib implementation
  static {
      System.loadLibrary("CalcFib");
  }
  public static native long NativeFibRecursive(long Num);
  public static native long NativeFibInterative(long Num);

}

There isn't much to explain: a single Java class with four functions, two implemented in Java and two in native code, computing Fibonacci numbers in two ways, recursive and iterative. The experimental result is as described above; for the difference between iteration and recursion, Google around. The important part is the public static native ... declarations: they tell Java that these functions will be supplied through the native interface, effectively saying, this part I'm handing off to someone else.

Normally javac xxx.java produces xxx.class, and then the javah [command] tool generates the JNI header for the declared native methods automatically, so you don't have to write the header by hand, which is convenient given JNI's pile of strange underscores. Here's the catch: the blog post mentioned earlier does it this way:

cd $(path_to_project)/bin            # move into the project's bin folder
javah -jni packagename.classname     # run javah to generate the header for the native functions declared earlier

That did not work on my machine. After trying alternatives, I ended up with:

cd $(path_to_project)/bin            # move into the project's bin folder
javah -classpath $(path_to_project)/bin/classes packagename.classname

After that, classname.h appears in the bin folder. Here packagename=com.esw.ndk.calc.fib and classname=CalcFib, so the combination is com.esw.ndk.calc.fib.CalcFib. Then just add a jni folder to the Eclipse project, move the header into it, and start writing the native code. This is the auto-generated header:

com_esw_ndk_calc_fib_CalcFib.h
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class com_esw_ndk_calc_fib_CalcFib */

#ifndef _Included_com_esw_ndk_calc_fib_CalcFib
#define _Included_com_esw_ndk_calc_fib_CalcFib
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     com_esw_ndk_calc_fib_CalcFib
 * Method:    NativeFibRecursive
 * Signature: (J)J
 */
JNIEXPORT jlong JNICALL Java_com_esw_ndk_calc_fib_CalcFib_NativeFibRecursive
  (JNIEnv *, jclass, jlong);

/*
 * Class:     com_esw_ndk_calc_fib_CalcFib
 * Method:    NativeFibInterative
 * Signature: (J)J
 */
JNIEXPORT jlong JNICALL Java_com_esw_ndk_calc_fib_CalcFib_NativeFibInterative
  (JNIEnv *, jclass, jlong);

#ifdef __cplusplus
}
#endif
#endif

Next, create the CalcFib.c file:

CalcFib.c
#include "com_esw_ndk_calc_fib_CalcFib.h"

long NativeFibRecursive(long Num) {
  if(Num <= 0)
      return 0;
  if(Num == 1)
      return 1;
  return NativeFibRecursive(Num-1) + NativeFibRecursive(Num-2);
}
long NativeFibInterative(long Num) {
  long previous = -1;
  long result = 1;
  long i = 0;
  long sum = 0;  /* long, not int: an int would truncate large Fibonacci values */
  for (i=0; i <= Num; i++) {
      sum = result + previous;
      previous = result;
      result = sum;
  }
  return result;
}
JNIEXPORT jlong JNICALL Java_com_esw_ndk_calc_fib_CalcFib_NativeFibRecursive
  (JNIEnv *env, jclass obj, jlong Num) {
  return NativeFibRecursive(Num);
}

JNIEXPORT jlong JNICALL Java_com_esw_ndk_calc_fib_CalcFib_NativeFibInterative
  (JNIEnv *env, jclass obj, jlong Num) {
  return NativeFibInterative(Num);
}

Beyond these files, the most important one is Android.mk, the makefile used by the Android NDK.

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE := CalcFib
LOCAL_SRC_FILES := CalcFib.c

include $(BUILD_SHARED_LIBRARY)

The variables inside can all be found on Google; they are mostly gcc-style definitions for clearing state or recording the current directory and environment variables. The key one is LOCAL_MODULE, the name of this native library. The NDK build prefixes lib automatically, so the output becomes libCalcFib.so; don't worry about that, and likewise don't add lib when loading the library. In the earlier Java code that loads the library, the string CalcFib is exactly the name set by LOCAL_MODULE:

static {
  System.loadLibrary("CalcFib");
}

Once everything is configured, build with the NDK. Inside the NDK folder you'll find:

  • ndk-build
  • ndk-build.cmd

ndk-build is for Mac/Linux and ndk-build.cmd is for Windows, so if you've set Eclipse up to build automatically, be careful not to reference the wrong ndk-build, or it will spit out a pile of errors. Then simply:

cd $(path_to_project)/jni
ndk-build

If nothing goes wrong, this produces the *.so files. That covers the essential parts of the NDK workflow. Normally the *.so files are added to the apk automatically; if the app fails at runtime because ModuleName.so cannot be found, it most likely means ndk-build failed or some .so file was never produced and packaged into the project.