The Gig Economy

The nature of paid work is changing. Part-time jobs are increasing while full-time jobs are on the decline, and temporary jobs such as driving for Uber or Lyft are becoming popular. In this blog we discuss what constitutes a gig economy. Left unregulated, the gig economy would only increase the disparity between the haves and have-nots, so we also discuss some ideas that can help reduce that disparity.

Characteristics of a Gig Economy

Part-time jobs are slowly replacing full-time jobs (Australian Bureau of Statistics, 2017). Individual workers bidding for jobs on Uber, Lyft, Freelancer.com, and serviceseeking.com.au are becoming increasingly common. In a recent Crossroads magazine article, Paolo Parigi and Xiao Ma (2016) refer to this working arrangement as a “Gig Economy”, characterised by trust and short duration.

Trust comes in two forms. The first is personal trust between the provider and the consumer of a service, as when an Uber driver and a passenger trust each other. The second is trust in a platform: providers and customers use a platform to co-ordinate their interaction, and both trust the platform to create a temporary contract that the two parties tacitly agree to. It also covers the fairness of the rating system, the screening out of malicious users, and the like.

Service providers in a gig economy have only a short-term commitment to the employer. Paradoxically, this is advantageous to the customer because the gig economy commoditises the gig: more than one plumber may bid for a job, thus reducing the cost to the customer.

Challenges

The authors argue that the challenge is not to convert short-term gigs into stable employment, but to work towards a regular stream of gigs. The gig economy, they claim, is inevitable. However, left to themselves, markets will tend to exploit the vulnerable; free markets would have 14-year-olds working 12-hour shifts in coal mines. Hence it is necessary for the state to intervene to distribute the fruits of the economy and its labour more evenly. Parigi and Ma suggest three things that workers in the gig economy should be afforded:

  • Training
  • Benefits
  • Free markets

Training

In all good organisations, every employee spends some time in training, usually amounting to at least a week or two of working hours. Policy makers can make this kind of training mandatory for all service providers in the gig economy. In addition, they can regulate the training market and provide tax concessions for undergoing such training.

Benefits

Except in the US, basic health care in most developed countries is provided by the state; in the US, health insurance is often subsidised by the employer. In addition, private organisations provide other benefits such as easier loans, tie-ups with hotels and car rentals, subsidised private education for children, extended maternity and paternity leave, and so on. The “lone ranger” in the gig economy has to pay full price for everything. This is where the state can intervene: it can create institutions, or assist existing institutions such as trade associations, to help their members avail themselves of such benefits.

Free Markets

The third suggestion is to stop protecting the interests of special interest groups. Here in New South Wales, Australia, the taxi industry pressured the government into limiting the number of taxi licences. The medical, accounting and legal professions are also guilty of this type of behaviour, where people from “outside” the system face a huge entry barrier. Allowing a free market to operate will reduce the cost to consumers and help the best in the trade to rise. The authors may need to be reminded that this is easier said than done: vested interests control most aspects of policy making, including major decisions such as going to war.

Platforms

Given that the gig economy is inevitable, Parigi and Ma make a few suggestions as to how to make it work effectively. They use the signalling theory framework to distinguish two types of signals. Assessment signals directly reflect the quality or characteristics of the underlying service or product. Conventional signals, on the other hand, convey quality through conventional means such as promises and qualitative assessments.

Assessment signals take time to build, while conventional signals are not particularly reliable. The platforms used for the sharing economy should distinguish between the two types of signals. The authors also suggest that such platforms should hide information that feeds bias and prejudice. Unfortunately that is a people problem that cannot be solved with technology alone unless the state intervenes.

Objections

The authors address two objections to their proposals, namely: (a) that the gig economy entrenches people in poorly paid jobs, and (b) that it results in a large disparity between the gig economy and the conventional economy of stable employment. They point to the industrial revolution, which displaced many professions but in the long run left people better off. The usual response to the issue of disparity is that the size of the pie can be increased, and hence, despite the disparity, most people will have a larger slice than before.

There are two main problems with this type of reasoning. In the first place, workers who are displaced must be provided for; otherwise, as the number of people who are unsure of their prospects increases, people will not make long-term commitments, be it in housing or in training. Secondly, the argument ignores externalities. The market economy encourages waste, and if people do not reduce consumption the planet is heading towards climate catastrophe. Resources like clean air and water, which we now take for granted, will dwindle, and large-scale wars over these resources could bring human civilisation to an end before global warming can unleash its weapons of mass destruction.

Need for alternate solutions

Parigi and Ma identify trust and short-term commitment as the main characteristics of the gig economy. They suggest training, access to benefits and protection from vested interests as means of reducing its exploitative nature. Left to run its course, the free market will exploit labour and harm the environment, because the costs of such externalities are borne either by the general public or by future generations who have no say in the matter. The authors welcome the gig economy, which they claim is inevitable, and address some issues that help reduce its exploitative nature. However, they barely address the more serious problems. A better solution would be a sharing economy that shares not just natural resources but also employment opportunities.

“The world has enough for everyone’s need, but not enough for everyone’s greed.” Mahatma Gandhi

References

Australian Bureau of Statistics, 6202.0 – Labour Force, Australia, Jan 2017, http://www.abs.gov.au/ausstats/abs@.nsf/mf/6202.0, retrieved on 22/02/2017.

Paolo Parigi and Xiao Ma, The Gig Economy, Crossroads: The ACM Magazine for Students, Winter 2016, vol. 23, no. 2.


No Silver Bullets: A Modern Perspective

“No Silver Bullet” (Brooks 1986) is an oft-cited paper in the annals of Software Engineering. While technology has changed significantly since the paper was first published, there are still some pearls of wisdom in it that are worth reflecting upon.

Tracing the distinction back to Aristotle, Brooks divides the difficulties in solving a complex problem into two categories. Essential difficulties are those inherent to the problem itself; accidental difficulties arise out of the inadequacy of the tools we employ to deal with it. The paper states that the four distinguishing features of a software engineering problem are:

  • Complexity,
  • Conformity,
  • Changeability and
  • Invisibility.

Complexity and conformity are inherent to any engineering problem, but they are more pronounced in software engineering. Conformity means that software has to adapt to existing hardware and infrastructure. Changeability refers to the fact that software is almost always modified after it is delivered, and invisibility refers to the lack of any natural way to visualise software. Software is abstract: there is no diagramming standard, like a floor plan or circuit diagram, with which to document a design. Documenting and communicating software design remains a challenge, and Brooks claims that advances in this area will play a significant role in addressing the essential complexity.

Brooks claims that most of the tools are geared towards addressing the accidental difficulties. These include better programming languages, better development environments, and better hardware and storage technologies. For example, syntax-directed highlighting is now the norm: almost all Integrated Development Environments, and for that matter editors like Vim and Emacs, provide that feature. He claims that Object Oriented Programming could help if languages could infer types. He dismisses Artificial Intelligence as a non-solution, although Expert Systems can be of use in narrow circumstances, in that expert developers can help build code generators or code checkers to improve the productivity of the average programmer.

There have been attempts to deal with essential complexity, such as designing new languages. Brooks points out that the long-lasting benefit of the Ada programming language is not so much the language itself as the many ideas it has spawned, such as “modularization, abstract data types, and hierarchical structuring.” Many of these ideas have since been incorporated into languages like C++, Java and C#. Brooks likes some of the ideas that Object Oriented Programming encourages, such as data hiding and inheritance, and points out that the two concepts are independent. For example, ‘C’ always had FILE*: the exact type definition of FILE was hidden from the programmer, so data hiding can be done in pure C. Brooks claims that programmers spend a lot of time keeping track of types, and that productivity can be improved if types are deduced. Functional programming languages like Lisp and ML allowed the programmer, by and large, to avoid declaring types. In recent times C++ and C# have gone a long way, through the use of auto and var, towards removing the need to declare the types of variables, by deducing the type from the value with which the variable is initialised.
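As a minimal sketch of what such deduction looks like (my own illustration in modern C++, not an example from Brooks’s paper):

    #include <map>
    #include <string>
    #include <vector>

    void type_deduction_example()
    {
      std::map<std::string, std::vector<int>> table;

      // Without deduction the full iterator type has to be spelled out:
      //   std::map<std::string, std::vector<int>>::iterator it = table.begin();
      // With 'auto' the compiler deduces it from the initialiser.
      auto it = table.begin();

      // Deduction also works for sizes and lambdas.
      auto count = table.size();                   // std::size_t
      auto is_empty = [&] { return table.empty(); };

      (void)it; (void)count; (void)is_empty;       // silence unused-variable warnings
    }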

In conclusion, some of Brooks’s ideas are relevant to this day. Reducing accidental complexity is easier, and hence a lot of effort has been expended in those areas. Addressing the essential complexity is harder, partly because it requires a new way of thinking, and any significant improvement is likely to be ignored unless a large institution or firm supports it.

References

Brooks, Fred P. (1986). No Silver Bullet — Essence and Accident in Software Engineering. Proceedings of the IFIP Tenth World Computing Conference: 1069–1076.


Protected Health Information incident at Massachusetts General Hospital

Introduction

22,000 patients’ personal records, including names, Social Security Numbers (SSNs), and dates of birth, were exposed at Massachusetts General Hospital (HIPAA Journal, 2016). In addition, some appointment details were also exposed. The incident came to the attention of the hospital in February 2016; however, patients were not notified of the breach until May 2016.

Such incidents appear to be occurring a little too often. The Identity Theft Resource Center (2016) lists 572 such incidents for the calendar year 2016 as of August 6. Houston (2001) addressed the privacy issues of storing and transmitting medical records over fifteen years ago, and, going by the list of references cited, he was not the first to do so. In this report we consider what took place and how the breach occurred. The technical fix will be obvious; the difficult part is convincing people of the importance of the issue. We conclude by suggesting that software developers take security more seriously.

What happened

Patterson Dental Supply Inc. (PSDI) develops Eaglesoft, a software package for managing dental patient records. Massachusetts General Hospital contracted PSDI to host the data relating to its dental patients, which PSDI did using Eaglesoft (HIPAA Journal, 2016). Sometime in February 2016, Justin Schafer logged on to one of PSDI’s anonymous FTP servers and could potentially have downloaded 22,000 patient records (Goodin, 2016). Neither Schafer nor PSDI claims that the data was actually downloaded. (No harm done; let’s go home… Not so fast.)

What are the issues involved

There are two issues here. First, why is the data important? One of the most common pieces of identifying information in the US is the SSN; in fact, the last four digits of the SSN are requested almost everywhere your identity needs to be confirmed. From the name it is possible to obtain the address in most cases. Armed with this information and a little social engineering, it is possible to masquerade as somebody else on social media. For example, Honan (2012) was locked out of his Twitter and Apple accounts by an attacker using much less information. The attacker caused a lot of inconvenience because Honan then had to identify himself even more rigorously to get back his Apple account, and had to register for a new Twitter account. Identity theft of this kind can lead to financial loss as well.

The next question is: did PSDI do enough to secure the data? Consider its track record. In addition to this incident, Schafer had reported earlier that PSDI had been using ‘dba/sql’ as a user-name/password combination for “years and years and years” (HIPAA Journal, 2016, para. 9). In this case the company attempted to ‘shoot the messenger’ by getting the Federal Bureau of Investigation to treat Schafer like a dangerous criminal (Goodin, 2016). They seem to be practising security through obscurity.

How could the breach have been prevented

No damage is known to have been done, because the breach was exposed by a security researcher. That merely means that any previous breaches, if there were any, went undetected. Houston (2001, p. 91), quoting Simpson (1996), claims that the majority of such breaches stem from internal sources. It is thus highly likely that people within the company knew of these anonymous FTP sites, and the patients can only hope that those insiders were all honest and ethical. That companies are still using unsecured FTP is cause for concern.

Technical measures to reduce such incidents

Access to Protected Health Information must be controlled. Some of the common access control methods are authentication, audit logging, limited access privilege and firewalls (Pfleeger C., Pfleeger S. and Margulies, pp. 72-75). The need for authentication is obvious. Consider logging: logging every operation will not be practical, as the log files would be so big that important details might get overlooked, but the user name and the IP address of the machine used to connect to the information server are the minimum requirements. Intelligent firewalls can also help: the IP address can be used to check whether a third party is masquerading as a legitimate user, because legitimate users usually log in from the same IP address or a small set of IP addresses. Limited access privilege essentially means that specific users have access only to specific sections of the data and to specific operations (Pfleeger et al., p. 75). For example, a receptionist booking appointments has no need to access diagnostic information.
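As a minimal illustration of limited access privilege (a sketch of my own, not PSDI’s or Eaglesoft’s design; the roles and record fields are hypothetical), access checks can be expressed as simple role tests at the point where sensitive data is read:

    #include <stdexcept>
    #include <string>

    enum class Role { Receptionist, Dentist, Administrator };

    struct PatientRecord
    {
      std::string name;
      std::string appointment;
      std::string diagnosis;   // sensitive clinical data
    };

    // Appointment details are visible to all staff.
    std::string read_appointment(Role, const PatientRecord& rec)
    {
      return rec.appointment;
    }

    // Only a clinical role may read the diagnosis; in a real system the attempt
    // would also be written to an audit log with the user name and source IP
    // address, as discussed above.
    std::string read_diagnosis(Role role, const PatientRecord& rec)
    {
      if (role != Role::Dentist)
        throw std::runtime_error("access denied: diagnosis requires a clinical role");
      return rec.diagnosis;
    }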

Conclusion

Data breaches are becoming so common that unless millions of people are involved they are not taken seriously. Citizens must take more interest in the issue. As Houston (2001, p. 93) suggests in his five-point plan, security must be designed into the system. While modern programming languages and practices have taken care of insecure programming practices to a large extent, software engineers and system architects must be aware of the trade-offs between security and usability.

References

  • Goodin, D. (2016, May 28). Armed FBI agents raid home of researcher who found unsecured patient data. Retrieved from arstechnica.com
  • HIPAA Journal. (2016, June 30). Massachusetts General Hospital Reports PHI Incident. Retrieved from hipaajournal
  • Honan, M. (2012, August 6). How Apple and Amazon Security Flaws Led to My Epic Hacking. Retrieved from www.wired.com
  • Houston, T. (2001). Security Issues for Implementation of E-Medical Records. Communications of the ACM, 89-94.
  • Identity Theft Resource Center. (2016, August 2). Retrieved from www.idtheftcenter.org
  • Pfleeger, C., Pfleeger, S., & Margulies, J. (2015). Security in Computing. Upper Saddle River: Prentice Hall.
  • Simpson, R. (1996, December). Security threats are an inside job. Nursing Management, 27(12), 43.

On Computing the Fibonacci Number in O(log(n))

Introduction

Liu Feng posted a number of Lisp functions to compute the n-th Fibonacci number with O(log(n)) time complexity. The last such function involved tail recursion. While the function is correct, there was no convincing proof of correctness. Essentially, computing the n-th Fibonacci number was reduced to computing \large {\bigl(\begin{smallmatrix} 0 & 1 \\ 1 & 1\end{smallmatrix}\bigr)^{n}}. Hence here we consider the simpler problem of numerical exponentiation; the extension to computing the Fibonacci number follows automatically. We present a semi-formal proof of correctness and show how the program could be implemented in C++.

Numerical Exponentiation

Consider the problem of computing x^N, where x is a number and N a positive integer. A naive solution requires N-1 multiplications. However, the function described below, in C++ syntax, computes the result in O(log(N)) time:

    template<typename number>
    number power_iter(number x, int N, number p)
    {
      number a = x;
      unsigned int n = N;

      auto even = [](unsigned int n) -> bool
        { return n % 2 == 0; };

      // loop invariant: x^N == a^n * p
      while (n > 0)
      {
        if (even(n))
          a = a*a, n = n / 2;
        else
          p = a*p, n = n - 1;
      }
      return p;
    }

    long power(long x, unsigned int N)
    {
      return power_iter(x, N, 1L);
    }

Proof of Correctness

Notice that the loop-invariant x^N = a^n * p is trivially established at the start of the first iteration. Using induction it can be shown that the invariant holds at the end of every iteration with the new values of a, n and p. Hence when n = 0 we have x^N = p, which is the desired value.
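To spell out the induction step (this working is mine, following the loop body): if n is even, the iteration sets a' = a*a and n' = n/2 while leaving p unchanged, so a'^n' * p = (a*a)^(n/2) * p = a^n * p; if n is odd, it sets p' = a*p and n' = n-1 while leaving a unchanged, so a^n' * p' = a^(n-1) * (a*p) = a^n * p. In both cases the invariant is preserved, and since n strictly decreases the loop terminates.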

Performance

If N is written in binary, the algorithm clears the least significant bit when it is one (the odd branch) and shifts the number right by one when it is zero (the even branch), performing one multiplication per step. Hence, if N has b bits, the worst case (all bits set) requires 2b-1 multiplications and the best case (N a power of two) requires b multiplications; either way the number of multiplications is O(log2(N)). For example, for N = 13 = 1101 in binary the successive values of n are 13, 12, 6, 3, 2, 1, 0, i.e. six multiplications.

An Aside

If the ‘*’ operator is implemented as addition, then the power operation becomes multiplication, provided the initial value of p is zero (the identity for addition). This is also known as the Russian Peasant multiplication algorithm.
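A minimal sketch of that observation (my own illustration, not part of the original post): wrapping a long in a type whose ‘*’ performs addition lets power_iter compute an ordinary product. The name AddAsMul is hypothetical.

    struct AddAsMul { long v; };

    // "Multiplication" of two AddAsMul values is really addition of the wrapped longs.
    AddAsMul operator*(AddAsMul lhs, AddAsMul rhs) { return AddAsMul{ lhs.v + rhs.v }; }

    long russian_peasant_multiply(long x, unsigned int N)
    {
      // 0 plays the role here that 1 plays for ordinary exponentiation.
      return power_iter(AddAsMul{ x }, N, AddAsMul{ 0 }).v;
    }
    // russian_peasant_multiply(7, 13) == 91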

Fibonacci Number

Define a Fibonacci matrix as a matrix of the form

\Large{\bigl(\begin{smallmatrix} a & b \\ b & a+b\end{smallmatrix}\bigr)}.

Fibonacci matrices are closed under multiplication. In other words

\Large{\bigl(\begin{smallmatrix} a & b \\ b & a+b\end{smallmatrix}\bigr)*   \bigl(\begin{smallmatrix} p & q \\ q & p+q\end{smallmatrix}\bigr)= \bigl(\begin{smallmatrix} x & y \\ y & x+y\end{smallmatrix}\bigr)}.

where
x = a*p + b*q

and

y = a*q + b*(p+q) = p*b + q*(a+b).

Note that Fibonacci matrices commute under multiplication, and that only two values are needed to represent a Fibonacci matrix. We are now ready to implement the Fibonacci number computation:

    struct Fibonacci_Matrix
    {
      unsigned long a, b;
    };
    Fibonacci_Matrix operator*(Fibonacci_Matrix A, Fibonacci_Matrix B)
    {
      Fibonacci_Matrix X{ A.a*B.a + A.b*B.b, A.a*B.b + A.b*(B.a + B.b) };
      return X;
    }
    unsigned NthFibonacci(unsigned int N)
    {
      Fibonacci_Matrix one{ 0, 1 };
      if (N == 0) return 0;
      Fibonacci_Matrix result = power_iter(one, N - 1, one);
      return result.b;
    }

Notice that the power_iter function remains the same; we just had to define multiplication for Fibonacci matrices in order to use it. Note also that power_iter relies only on the multiplication operator being associative, not on it being commutative, although in this case commutativity happens to hold anyway.
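A quick spot check (my own test harness, not part of the original post):

    #include <cassert>

    int main()
    {
      // The Fibonacci sequence starts 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...
      assert(NthFibonacci(0) == 0);
      assert(NthFibonacci(1) == 1);
      assert(NthFibonacci(10) == 55);
      assert(power(3, 5) == 243);
      return 0;
    }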


A Quick Note Comparing Two Implementations of Neural Networks in C++

Background

MLPack-ANN and Tiny-cnn are two good C++ implementations of neural networks, but their design philosophies differ. This blog compares the two implementations from the point of view of executing a neural network on an embedded device and concludes that, although MLPack-ANN may be expected to be faster, the flexibility that Tiny-cnn offers makes it the better choice.

Design

MLPack-ANN implements neural networks in C++ using tuples. A tuple is an ordered, fixed-size collection of objects; the collection itself cannot be changed, although the state of the objects it holds may. Since the layers of a neural network are ordered, a tuple is a good way to represent one. In C++ a tuple is a template, and thus MLPack-ANN uses static polymorphism to configure a neural network.
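A minimal sketch of what a tuple-based, statically polymorphic network looks like (an illustration of the idea only; these class names are mine and do not correspond to MLPack-ANN’s actual API):

    #include <tuple>
    #include <vector>

    // Two toy layer types; each knows how to transform a vector of activations.
    struct Relu
    {
      std::vector<double> forward(std::vector<double> in) const
      { for (auto& v : in) v = v > 0 ? v : 0; return in; }
    };
    struct Passthrough
    {
      std::vector<double> forward(std::vector<double> in) const { return in; }
    };

    // The layer types are template arguments, so the whole model is fixed at compile time.
    template<typename... Layers>
    struct StaticNetwork
    {
      std::tuple<Layers...> layers;
    };

    StaticNetwork<Relu, Passthrough, Relu> net;   // changing the model means recompiling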

Speed

Tiny-cnn uses a decorator pattern (Gamma, Helm, Johnson & Vlissides 1995, pp. 175-184), implemented with virtual functions (dynamic polymorphism), to achieve the same result. One would expect static polymorphism to be faster, although I am not aware of any empirical data to support that claim in this case. However, given that most of the time, in both the learning phase and the execution phase, is spent performing matrix multiplications, the difference in run times is unlikely to be significant. Based on my own experience, I have reason to believe that Tiny-cnn is in fact faster for training on the MNIST data.

Flexibility

The use of templates means that the neural network model has to be decided at compile time, whereas dynamic polymorphism allows the model to be decided at run time; for example, the number of layers can be set in a configuration file. This matters because one would typically train the network with various models on a large machine, perhaps in the cloud, and then pass the chosen model to a device that executes it. For example, a device could react to the sound of a crying baby and ignore other sounds.
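In contrast to the tuple sketch above, here is a minimal sketch of the dynamically polymorphic style (again an illustration of the idea only; the class and function names are mine and do not correspond to Tiny-cnn’s actual API):

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Layer
    {
      virtual std::vector<double> forward(const std::vector<double>& in) = 0;
      virtual ~Layer() = default;
    };

    struct ReluLayer : Layer
    {
      std::vector<double> forward(const std::vector<double>& in) override
      {
        std::vector<double> out(in.size());
        for (std::size_t i = 0; i < in.size(); ++i)
          out[i] = in[i] > 0 ? in[i] : 0;
        return out;
      }
    };

    // Because layers are held behind a common interface, the number and kind
    // of layers can come from a configuration file read at run time.
    std::vector<std::unique_ptr<Layer>> build_network(std::size_t n_layers)
    {
      std::vector<std::unique_ptr<Layer>> net;
      for (std::size_t i = 0; i < n_layers; ++i)
        net.push_back(std::make_unique<ReluLayer>());
      return net;
    }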

Tiny-cnn configures the distance metric and the optimisation function at compile time, because they are both template parameters. Changing these from static to dynamic polymorphism should not be difficult.

Conclusion

Although virtual functions involve some overhead, the flexibility they offer far outweighs the cost in this case, because they allow the model to be changed significantly at run time.

References

  1. Erich Gamma, Richard Helm, Ralph Johnson & John Vlissides (1995). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.

Git Collaboration Patterns

Motivation

Git is a powerful Distributed Version Control System (DVCS). While ad hoc methods suffice for single-person projects, group projects must follow conventions to avoid undoing the work of collaborators and redoing work that has already been done. This blog is based on Atlassian‘s description of four Git collaboration patterns.

  • The Centralized Workflow model is suited for small (two or three person) projects that use Git as just a remote store.
  • In the Feature Branch Workflow model the developer is assumed to produce unsafe code, and hence merging into the master branch can be done only after due code review.
  • The third pattern, invented by Vincent Driessen, is the Gitflow Workflow. It is somewhat like the Mainline Model (Wingerd 2005) applied to a distributed version control system.
  • The Forking Workflow model is suited for third parties contributing to an existing open source project by generating a pull request.

Centralized Workflow

Let us assume that we have a local Git project that is in sync with the remote version. We then make changes to the local version. Meanwhile other collaborators may be making changes as well. When we decide to upload our changes, we first have to merge the remote version into our local one before pushing ours to the remote site. The Git commands to do that are listed below:

$ git pull --rebase origin master

This might generate a conflict. Fix the conflict, then:

$ git add <files with resolved conflicts>
$ git rebase --continue

Fix any additional conflicts and repeat the two steps above until there are none left. If at any time you wish to abort the whole process:

$ git rebase --abort

Once the merge is complete, test the project again and push it to the remote site:

$ git push origin master

 

Feature Branch Workflow

Consider a team in which one person has responsibility for the integrity of the code base; we’ll refer to that person as the maintainer. When developers wish to add a new feature, they create a new branch of the code base; when the feature is ready to be integrated, they inform the maintainer, who merges the new feature into his or her local codebase, approves it, and pushes it back to the central codebase for all to use.
The steps executed by the developer are:

$ git init
$ git remote add origin <central repository URL>
$ git pull --rebase origin master
$ git checkout -b feature master
$ git add <changed files>
$ git commit
$ git push origin feature

The repo administrators then do the merge on their local machine and push the master branch back to the repo.

GitFlow Workflow

In this model there are two main trunks: the master trunk and the development trunk. A trunk is essentially a branch, except that, unlike the other branches in this model, it is long lived; the other branches can be deleted once their purpose has been served. The development trunk has many feature branches. It also has a release branch that contains the next version of the software; the release branch receives no new features, only bug fixes. The master trunk has hotfix branches that contain only bug fixes to the master trunk; each hotfix must ultimately be merged back into the master trunk and also into the development trunk (or the current release branch), so that the fix is not lost. Vincent Driessen’s figure in “A successful Git branching model” explains it well.

 

Forking Workflow

Consider an open source project on GitHub. If a third party wants to contribute to it,

  • they fork the project into their own repository;
  • then, on their local machine, they create a feature branch;
  • once the implementation is complete, they upload the branch to their own repository; and
  • they then open a pull request against the original repository from which they forked.

The administrators of the open source project may, at their discretion, merge the pull request into the master branch. The third party can then update their main branch. This ensures that the main branch stays clean and that the third party can benefit from other contributions made to the open source project.

References

  1. Laura Wingerd (November 2005) Practical Perforce, O’Reilly Media, Inc.

C++ Idioms to Handle Tuples

Consider the following code, which will NOT compile:

template< typename... Args>
void for_every(std::tuple<Args...> & t)
{
	const size_t n= sizeof...(Args);
	for(size_t i=0; i<n; ++i )
		SomeAction(std::get<i>(t));
}

This will not compile because the index passed to std::get must be known at compile time, but ‘i’ changes at run time. There are a few ways of addressing this issue.

A Simple Solution


One way [1] of dealing with this, which I find the simplest, is shown below:

  #include <cstddef>   // size_t
  #include <tuple>     // std::tuple, std::get

  template<size_t N>
  struct identity { enum {value=N};};
  
  template<typename Archive, typename... Args>
  struct ArchiveTuple
  {
    ArchiveTuple(Archive& out_, std::tuple<Args...>& t_) :
      t(t_),
      ar(out_)
    {}

    template<size_t N>
    void act(identity<N>)
    {
      const size_t k = sizeof...(Args)-N;
      Serialize(ar, std::get<k>(t));
      act(identity<N-1>());
    }


    void act(identity<0>)
    {}
  private:
    std::tuple<Args...>& t;
    Archive& ar;
  };

  template<typename Archive, typename... Args>
  void serialize(Archive& ar, std::tuple<Args...>& t)
  {
    ArchiveTuple<Archive, Args...> arch(ar, t);
    arch.act(identity<sizeof...(Args)>());
  }

The act function is called recursively for decreasing values of N until N reaches zero, at which point the non-templated overload is called. This exploits the rule that overload resolution prefers a non-template function over a function template when both match equally well.
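A usage sketch for the idiom above (the Archive type and the Serialize overload below are hypothetical stand-ins of my own, since the original post does not define them):

  #include <iostream>
  #include <string>

  struct Archive                   // a toy archive that just writes to a stream
  {
    std::ostream& out;
  };

  template<typename T>
  void Serialize(Archive& ar, const T& value)
  {
    ar.out << value << '\n';
  }

  int main()
  {
    auto t = std::make_tuple(42, std::string("hello"), 3.14);
    Archive ar{ std::cout };
    serialize(ar, t);              // prints 42, hello and 3.14, one per line
    return 0;
  }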

Template Specialisation


Another idiom [2] is shown below:

template<size_t N>
struct Action
	{
	template< typename... Args>
	static void act(std::tuple<Args...> &t)
		{
        const size_t k = sizeof...(Args)-N;
		std::cout << std::get<k>(t) << std::endl;
		Action<N - 1>::act(t);
		}
	};

template<>
struct Action<0>
	{
	template< typename... Args>
	static void act(std::tuple<Args...> & t)
		{
		}
	};

A test sample using the above code is listed below.

int main()
{
	auto mytuple = std::make_tuple(
                0,
                std::string("Hello"), 
                100.8, 
                true);
	Action<std::tuple_size<decltype(mytuple)>::value>::act(mytuple);
    return 0;
}

Here template specialisation ensures that Action<0>::act is called when N is zero, thus terminating the recursion. The question then arises as to whether member functions can be specialised in a similar way, as shown below:

  template<typename Archive, typename... Args>
  struct ArchiveTuple
  {
    ArchiveTuple(Archive& out_, std::tuple<Args...>& t_) :
      t(t_),
      ar(out_)
    {}

    template<size_t N>
    void act()
    {
      const size_t k = sizeof...(Args)-N;
      std::cout << std::get<k>(t);
      act<N - 1>();
    }


    template<>
    void act<0>()
    {}
  private:
    std::tuple<Args...>& t;
    Archive& ar;
  };

  template<typename Archive, typename... Args>
  void serialize(Archive& ar, std::tuple<Args...>& t)
  {
    ArchiveTuple<Archive, Args...> arch(ar, t);
    arch.template act<sizeof...(Args)>();
  }

Although this compiles with the Visual C++ compiler, it is non-standard and GCC 4.9 does not accept it: “Explicitly specialized members need their surrounding class templates to be explicitly specialized as well.” [4] The difference between this and the first idiom is that here we rely on template specialisation, whereas the first one relies on function overloading.

SFINAE


The most elegant solution, which took me a while to appreciate, is listed below. MLPack’s neural network [3] components use this idiom:


  template<size_t I = 0, typename... Tp>
  typename std::enable_if<I < sizeof...(Tp), void>::type
  ResetParameter(std::tuple<Tp...>& network)
  {
    Reset(std::get<I>(network));
    ResetParameter<I + 1, Tp...>(network);
  }
  template<size_t I = 0, typename... Tp>
  typename std::enable_if<I == sizeof...(Tp), void>::type
  ResetParameter(std::tuple<Tp...>& /* unused */) 
  { /* Nothing to do here */ }

You would then call it using:

  ResetParameter(network);

where network is a std::tuple of layer objects.
Here the recursion is not terminated by an ordinary overload. SFINAE (substitution failure is not an error) ensures that for any given I exactly one of the two templates is viable, and the empty function is the one selected when I equals the tuple size, terminating the recursion.

References

  1. A solution from stackoverflow whose source I am unable to retrace
  2. Emsr’s solution http://stackoverflow.com/questions/1198260/iterate-over-tuple
  3. http://www.mlpack.org as of Feb. 27, 2016
  4. Explicit specialization of template class member function